diff --git a/src/current/_includes/releases/v22.1/v22.1.0-alpha.1.md b/src/current/_includes/releases/v22.1/v22.1.0-alpha.1.md deleted file mode 100644 index 37037c9ad49..00000000000 --- a/src/current/_includes/releases/v22.1/v22.1.0-alpha.1.md +++ /dev/null @@ -1,553 +0,0 @@ -## v22.1.0-alpha.1 - -Release Date: January 24, 2022 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

### Backward-incompatible changes
- -- Using [`SESSION_USER`](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#special-syntax-forms) in a projection or `WHERE` clause now returns the `SESSION_USER` instead of the `CURRENT_USER`. For backward compatibility, use [`session_user()`](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#system-info-functions) for `SESSION_USER` and `current_user()` for `CURRENT_USER`. [#70444][#70444] -- Placeholder values (e.g., `$1`) can no longer be used for role names in [`ALTER ROLE`](https://www.cockroachlabs.com/docs/v22.1/alter-role) statements or for role names in [`CREATE ROLE`](https://www.cockroachlabs.com/docs/v22.1/create-role)/[`DROP ROLE`](https://www.cockroachlabs.com/docs/v22.1/drop-role) statements. [#71498][#71498] - -
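The `SESSION_USER` change above is easy to check from any SQL session. This is only a sketch; the output depends on the logged-in role and on any role switching done in the session.

~~~ sql
-- The special syntax form now reports the session user rather than the
-- current user.
SELECT session_user;

-- The function forms remain available for backward compatibility.
SELECT session_user(), current_user();
~~~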

### Security updates
- -- Authenticated HTTP requests to nodes can now contain additional cookies with the same name as the one CockroachDB uses ("session"). The HTTP spec permits duplicates and will now attempt to parse all cookies with a matching name before giving up. This can resolve issues with running other services on the same domain as your CockroachDB nodes. [#70792][#70792] -- Added a new flag `--external-io-enable-non-admin-implicit-access` that can remove the `admin`-only restriction on interacting with arbitrary network endpoints and using `implicit` auth in operations such as [`BACKUP`](https://www.cockroachlabs.com/docs/v22.1/backup), [`IMPORT`](https://www.cockroachlabs.com/docs/v22.1/import), or [`EXPORT`](https://www.cockroachlabs.com/docs/v22.1/export). [#71594][#71594] -- When configuring passwords for SQL users, if the client presents the password in cleartext via `ALTER`/`CREATE USER`/`ROLE WITH PASSWORD`, CockroachDB is responsible for hashing this password before storing it. By default, this hashing uses CockroachDB's bespoke `crdb-bcrypt` algorithm, which is based on the standard [bcrypt algorithm](https://wikipedia.org/wiki/Bcrypt). The cost of this hashing function is now configurable via the new [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) `server.user_login.password_hashes.default_cost.crdb_bcrypt`. Its default value is `10`, which corresponds to an approximate password check latency of 50-100ms on modern hardware. This value should be increased over time to reflect improvements to CPU performance: the latency should not become so small that it becomes feasible to brute-force passwords via repeated login attempts. Future versions of CockroachDB will likely update the default accordingly. [#74582][#74582] - -
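For the bcrypt cost setting described above, the cost is changed like any other cluster setting. The value `11` below is only an example; because the cost is a bcrypt work factor, each increment of 1 roughly doubles the per-login hashing time.

~~~ sql
-- Inspect the current default cost (10 unless it has been changed).
SHOW CLUSTER SETTING server.user_login.password_hashes.default_cost.crdb_bcrypt;

-- Example only: raise the cost as hardware gets faster.
SET CLUSTER SETTING server.user_login.password_hashes.default_cost.crdb_bcrypt = 11;
~~~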

### General changes
- -- Non-cancelable [jobs](https://www.cockroachlabs.com/docs/v22.1/show-jobs) now do not fail unless they fail with a permanent error. They retry with exponential backoff if they fail due to a transient error. Furthermore, jobs that perform reverting tasks do not fail. Instead, they are retried with exponential backoff if an error is encountered while reverting. As a result, transient errors do not impact jobs that are reverting. [#69300][#69300] -- CockroachDB now supports exporting operation [traces](https://www.cockroachlabs.com/docs/v22.1/show-trace) to [OpenTelemetry](https://opentelemetry.io/)-compatible tools using the OTLP protocol through the `trace.opentelemetry.collector` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings). [#65599][#65599] -- CockroachDB now supports exporting traces to a [Jaeger](https://www.jaegertracing.io/) agent through the new `trace.jaeger.agent` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings). Exporting to Jaeger was previously possible by configuring the Jaeger agent to accept Zipkin traces and using the `trace.zipkin.collector ` cluster setting; this configuration is no longer required. [#65599][#65599] -- Support for exporting to Datadog and Lightstep through other interfaces has been retired; these tools can use OpenTelemetry data. The [cluster settings](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) `trace.lightstep.token`, `trace.datadog.agent`, and `trace.datadog.project` have been deprecated; they no longer have any effect. [#65599][#65599] -- Tracing transaction commits now includes details about replication. [#72738][#72738] - -
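A minimal sketch of turning on the trace export described above. The collector and agent addresses are placeholders; point them at whatever OTLP-compatible collector or Jaeger agent you run.

~~~ sql
-- Placeholder address for an OTLP-compatible collector.
SET CLUSTER SETTING trace.opentelemetry.collector = 'otel-collector.example.com:4317';

-- Placeholder address for a Jaeger agent.
SET CLUSTER SETTING trace.jaeger.agent = 'jaeger-agent.example.com:6831';
~~~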

### Enterprise edition changes
- -- Updated retryable error warning message to begin with `"WARNING"`. [#70226][#70226] -- Temporary tables are now [restored](https://www.cockroachlabs.com/docs/v22.1/restore) to their original database instead of to `defaultdb` during a [full cluster restore](https://www.cockroachlabs.com/docs/v22.1/restore#full-cluster). Furthermore, `defaultdb` and `postgres` are dropped before a full cluster restore and will only be restored if they are present in the [backup](https://www.cockroachlabs.com/docs/v22.1/backup) being restored. [#71890][#71890] -- [Changefeeds](https://www.cockroachlabs.com/docs/v22.1/changefeed-sinks) now support [GCP Pub/Sub](https://cloud.google.com/pubsub) as a sink. [#72056][#72056] - -
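The Pub/Sub sink mentioned above is addressed with a changefeed sink URI like any other sink. The sketch below is an assumption about the URI shape (a `gcpubsub://` scheme with project, region, and topic parameters); the table, project, and topic names are placeholders, and the changefeed sink documentation has the authoritative format and authentication options.

~~~ sql
-- Sketch only: names and URI parameters are placeholders/assumptions.
CREATE CHANGEFEED FOR TABLE orders
  INTO 'gcpubsub://my-project?region=us-east1&topic_name=orders_feed';
~~~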

### SQL language changes
- -- Added new job control statements allowing an operator to manipulate all jobs of a specific type: ` ALL JOBS`. This is supported in [`CHANGEFEED`](https://www.cockroachlabs.com/docs/v22.1/create-changefeed), [`BACKUP`](https://www.cockroachlabs.com/docs/v22.1/backup), [`IMPORT`](https://www.cockroachlabs.com/docs/v22.1/import), and [`RESTORE`](https://www.cockroachlabs.com/docs/v22.1/restore) jobs. For example: `PAUSE ALL CHANGEFEED JOBS`. [#69314][#69314] -- [`EXPLAIN ANALYZE`](https://www.cockroachlabs.com/docs/v22.1/explain-analyze) now contains more information about the [MVCC](https://www.cockroachlabs.com/docs/v22.1/architecture/storage-layer#mvcc) behavior of operators that scan data from disk. [#64503][#64503] -- Added support for SQL arrays containing JSON for in-memory processing. This does not add support for storing SQL arrays of JSON in tables. [#70041][#70041] -- Placeholder values can now be used as the right-hand operand of the `JSONFetchVal (->)` and `JSONFetchText (->>)` [operators](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#supported-operations) without ambiguity. This argument will be given the text type and the "object field lookup" variant of the operator will be used. [#70066][#70066] -- Fixed `createdb` and `settings` columns for [`pg_catalog` tables](https://www.cockroachlabs.com/docs/v22.1/pg-catalog#data-exposed-by-pg_catalog): `pg_user`, `pg_roles`, and `pg_authid`. [#69609][#69609] -- The `information_schema._pg_truetypid`, `information_schema._pg_truetypmod`, and `information_schema._pg_char_max_length` [built-in functions](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#built-in-functions) are now supported to improve compatibility with PostgreSQL. [#69913][#69913] -- The `pg_my_temp_schema` [built-in function](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#built-in-functions) now properly returns the OID of the active session's temporary schema, if one exists. [#69909][#69909] -- The `pg_is_other_temp_schema` [built-in function](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#built-in-functions) is now supported, which returns whether the given OID is the OID of another session's temporary schema. [#69909][#69909] -- The `information_schema._pg_index_position` [built-in function](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#built-in-functions) is now supported, which improves compatibility with PostgreSQL. [#69911][#69911] -- Extended index scan hints to allow [zigzag joins](https://www.cockroachlabs.com/docs/v22.1/cost-based-optimizer#zigzag-joins) to be forced. [#67737][#67737] -- `pg_authid.rolesuper`, `pg_roles.rolesuper`, and `pg_user.usesuper` are now true for users/roles that have `admin` role. [#69981][#69981] -- Added a warning that [sequences](https://www.cockroachlabs.com/docs/v22.1/create-sequence) are slower than using [`UUID`](https://www.cockroachlabs.com/docs/v22.1/uuid). [#68964][#68964] -- SQL queries with [`ORDER BY x LIMIT k`](https://www.cockroachlabs.com/docs/v22.1/order-by) clauses may now be transformed to use TopK sort in the query plan if the limit is a constant. Although this affects the output of [`EXPLAIN`](https://www.cockroachlabs.com/docs/v22.1/explain), using TopK in the query plan does not necessarily mean that it is used during execution. 
[#68140][#68140] -- The `has_tablespace_privilege`, `has_server_privilege`, and `has_foreign_data_wrapper_privilege` [built-in functions](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#built-in-functions) now return [`NULL`](https://www.cockroachlabs.com/docs/v22.1/null-handling) instead of `true` when provided with a non-existed OID reference. This matches the behavior of newer PostgreSQL versions. [#69939][#69939] -- The `pg_has_role` [built-in function](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#built-in-functions) is now supported, which returns whether a given user has privileges for a specified role or not. [#69939][#69939] -- Added the `json_populate_record`, `jsonb_populate_record`, `json_populate_recordset`, and `jsonb_populate_recordset` functions, which transform JSON into row tuples based on the labels in a record type. [#70115][#70115] -- The `enable_drop_enum_value` [session variable](https://www.cockroachlabs.com/docs/v22.1/set-vars#supported-variables) has been removed, along with the corresponding [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings). The functionality of being able to drop `enum` values is now enabled automatically. Queries that refer to the session/cluster setting will still work but will have no effect. [#70369][#70369] -- The array [built-in functions](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#built-in-functions) (`array_agg`, `array_cat`, `array_position`, etc.) now operate on record types. [#70332][#70332] -- When an invalid cast to OID is made, a `pgerror` now returns with code `22P02`. This previously threw an assertion error. [#70454][#70454] -- Added the `new_db_name` option to the [`RESTORE DATABASE`](https://www.cockroachlabs.com/docs/v22.1/restore#databases) statement, allowing the user to rename the database they intend to restore. [#70222][#70222] -- Fixed error messaging for [built-in functions](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#built-in-functions) for sequences. Example: `SELECT nextval('@#%@!324234')` correctly returns `relation "@#%@!324234" does not exist` (if the relation doesn't exist) instead of a syntax error. `SELECT currval('')` returns `currval\(\): invalid table name:`. [#70590][#70590] -- It is now possible to cast [JSON](https://www.cockroachlabs.com/docs/v22.1/jsonb) booleans to the `BOOL` type, and to cast JSON numerics with fractions to rounded `INT` types. Error messages are now more clear when a cast from a JSON value to another type fails. [#70522][#70522] -- Added a new SQL [built-in function](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#built-in-functions) `unordered_unique_rowid `, which generates a globally unique 64-bit integer that does not have ordering. [#70338][#70338] -- Added a new [`serial_normalization` case](https://www.cockroachlabs.com/docs/v22.1/set-vars#supported-variables) `unordered_rowid `, which generates a globally unique 64-bit integer that does not have ordering. [#70338][#70338] -- A hint is now provided when using a [`SERIAL4` type](https://www.cockroachlabs.com/docs/v22.1/serial) that gets upgraded to a `SERIAL8` due to the `serial_normalization` session variable requiring an `INT8` to succeed. [#70656][#70656] -- Improved the error message to identify the column and data type when users try to select a named field from an anonymous record that has no labels. 
[#70726][#70726] -- Implemented `pg_statistic_ext` on [`pg_catalog`](https://www.cockroachlabs.com/docs/v22.1/pg-catalog#data-exposed-by-pg_catalog). [#70591][#70591] -- Implemented `pg_shadow` at [`pg_catalog`](https://www.cockroachlabs.com/docs/v22.1/pg-catalog#data-exposed-by-pg_catalog). [#68255][#68255] -- Disallowed cross-database references for sequences by default. This can be enabled with the [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) `sql.cross_db_sequence_references.enabled`. [#70581][#70581] -- Added the ability to comment on SQL table [constraints](https://www.cockroachlabs.com/docs/v22.1/constraints) using PostgreSQL's `COMMENT ON CONSTRAINT` syntax. [#69783][#69783] -- Added a `WITH COMMENT` clause to the [`SHOW CONSTRAINT`](https://www.cockroachlabs.com/docs/v22.1/show-constraints) statement that causes constraint comments to be displayed. [#69783][#69783] -- Added empty stubs for tables and columns. Tables: `pg_statistic`, `pg_statistic_ext_data`, `pg_stats`, `pg_stats_ext`. Columns: `pg_attribute.attmissingval`. [#70865][#70865] -- Previously, the behavior of [casting](https://www.cockroachlabs.com/docs/v22.1/data-types#data-type-conversions-and-casts) an [`INT`](https://www.cockroachlabs.com/docs/v22.1/int) to `CHAR` was similar to `BPCHAR` where only the first digit of the integer was returned. Now casting `INT` to `CHAR` will be interpreted as an ASCII byte, which aligns the overall behavior more with PostgreSQL. [#70942][#70942] -- A parameter of type `CHAR` can now be used as a parameter in a prepared statement. [#70942][#70942] -- The `information_schema._pg_numeric_precision`, `information_schema._pg_numeric_precision_radix`, and `information_schema._pg_numeric_scale` [built-in functions](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#built-in-functions) are now supported, which improves compatibility with PostgreSQL. [#70881][#70881] -- If the time zone is set in a GMT offset, for example `+7` or `-11`, the timezone will be formatted as `<+07>-07` and `<-11>+11` respectively instead of `+7`, `-11`. This most notably shows up when doing [`SHOW TIME ZONE`](https://www.cockroachlabs.com/docs/v22.1/show-vars#supported-variables). [#70716][#70716] -- `NULLS FIRST` and `NULLS LAST` specifiers are now supported for [`ORDER BY`](https://www.cockroachlabs.com/docs/v22.1/order-by). [#71083][#71083] -- Added `SHOW CREATE ALL SCHEMAS` to allow the user to retrieve [`CREATE` statements](https://www.cockroachlabs.com/docs/v22.1/create-schema) to recreate the schemas of the current database. A flat log of the `CREATE` statements for schemas is returned. [#71138][#71138] -- The session variable `inject_retry_errors_enabled` has been added. When this is true, any statement that is a not a [`SET`](https://www.cockroachlabs.com/docs/v22.1/set-vars) statement will return a [transaction retry error](https://www.cockroachlabs.com/docs/v22.1/transaction-retry-error-reference) if it is run inside of an explicit transaction. If the client retries the transaction using the special `cockroach_restart` [`SAVEPOINT`](https://www.cockroachlabs.com/docs/v22.1/savepoint), then after the third error the transaction will proceed as normal. Otherwise, the errors will continue until `inject_retry_errors_enabled` is set to false. The purpose of this setting is to allow users to test their transaction retry logic. [#71357][#71357] -- Arrays of [`ENUM`](https://www.cockroachlabs.com/docs/v22.1/enum) data types can now be compared. 
[#71427][#71427] -- `NULLS` can be ordered [`NULLS LAST `](https://www.cockroachlabs.com/docs/v22.1/order-by#parameters) by default if the `null_ordered_last` [session variable](https://www.cockroachlabs.com/docs/v22.1/show-vars#supported-variables) is set to true. [#71429][#71429] -- Previously, comparing against [`bytea[]`](https://www.cockroachlabs.com/docs/v22.1/bytes) without a cast (e.g., `SELECT * FROM t WHERE byteaarrcol = '{}'`) would result in an ambiguous error. This has now been resolved. [#71501][#71501] -- Previously, placeholders in an [`ARRAY`](https://www.cockroachlabs.com/docs/v22.1/array) (e.g., `SELECT ARRAY[$1]::int[]`) would resolve in an ambiguous error. This has now been fixed. [#71432][#71432] -- [`EXPLAIN`](https://www.cockroachlabs.com/docs/v22.1/explain) output now displays the limit hint when it is nonzero as part of the `estimated row count` field. [#71299][#71299] -- Implicit casts performed during [`INSERT`](https://www.cockroachlabs.com/docs/v22.1/insert) statements now more closely follow PostgreSQL's behavior. Several minor bugs related to these types of casts have been fixed. [#70722][#70722] -- Newly created tables now have `_pkey` by default as their index/constraint name. [#70604][#70604] -- A newly created [`FOREIGN KEY`](https://www.cockroachlabs.com/docs/v22.1/foreign-key) now has the same constraint name as PostgreSQL— `__fkey`. Previously, this was `fk__ref_`. [#70658][#70658] -- `CURRENT_USER` and `SESSION_USER` can now be used as the role identifier in [`ALTER ROLE`](https://www.cockroachlabs.com/docs/v22.1/alter-role) statements. [#71498][#71498] -- Array [built-in functions](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#built-in-functions) can now be used with arrays of [`ENUM`](https://www.cockroachlabs.com/docs/v22.1/enum). [#71482][#71482] -- Introduced an implicitly defined type for every table, which resolves to a `TUPLE` type that contains all of the columns in the table. [#70100][#70100] -- The [`WITH RECURSIVE`](https://www.cockroachlabs.com/docs/v22.1/common-table-expressions#recursive-common-table-expressions) variant that uses `UNION` (as opposed to `UNION ALL`) is now supported. [#71685][#71685] -- Infinite decimal values can now be encoded when sending data to/from the client. The encoding matches the PostgreSQL encoding. [#71772][#71772] -- Previously, certain [`ENUM`](https://www.cockroachlabs.com/docs/v22.1/enum) built-in functions or operators required an explicit `ENUM` cast. This has been reduced in some cases. [#71653][#71653] -- Removed the [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) `sql.defaults.interleaved_tables.enabled` as interleaved support is now fully removed. [#71537][#71537] -- `T_unknown` ParameterTypeOIDs in the PostgreSQL frontend/backend protocol are now correctly handled. [#71971][#71971] -- [String literals](https://www.cockroachlabs.com/docs/v22.1/sql-constants#string-literals) can now be parsed as tuples, either in a cast expression, or in other contexts like function arguments. [#71916][#71916] -- Added the [function](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators) `crdb_internal.reset_index_usage_stats()` to clear index usage stats. This can be invoked from the SQL shell. [#71896][#71896] -- Custom session options can now be used, i.e., any [session variable](https://www.cockroachlabs.com/docs/v22.1/show-vars) that has `.` in the name. [#71915][#71915] -- Added logic to process an `EXPORT PARQUET` statement. 
[#71868][#71868] -- Added ability to `EXPORT PARQUET` for relations with `FLOAT`, `INT`, and `STRING` column types. [#71868][#71868] -- This change removes support for: `IMPORT TABLE ... CREATE USING` and `IMPORT TABLE ... DATA`. `` refers to CSV, Delimited, PGCOPY, AVRO. These formats do not define the table schema in the same file as the data. The workaround following this feature removal is to use [`CREATE TABLE`](https://www.cockroachlabs.com/docs/v22.1/create-table) with the same schema that was previously being passed into the [`IMPORT`](https://www.cockroachlabs.com/docs/v22.1/import) statement, followed by an [`IMPORT INTO`](https://www.cockroachlabs.com/docs/v22.1/import-into) the newly created table. [#71058][#71058] -- Previously, running [`COMMENT ON CONSTRAINT`](https://www.cockroachlabs.com/docs/v22.1/comment-on) on a table in a schema would succeed but the comment would not actually be created. Now the comment is successfully created. [#71985][#71985] -- `INTERLEAVE IN PARENT` is permanently removed from CockroachDB. [#70618][#70618] -- [`EXPLAIN ANALYZE`](https://www.cockroachlabs.com/docs/v22.1/explain-analyze) now shows maximum allocated memory and maximum SQL temp disk usage for a statement. [#72113][#72113] -- Added `SHOW CREATE ALL TYPES` to allow the user to retrieve the statements to recreate user-defined types of the current database. It returns a flat log of the `CREATE` statements for types. [#71326][#71326] -- It is now possible to swap names (for tables, etc.) in the same transaction. For example: - - ~~~ sql - CREATE TABLE foo(); - BEGIN; - ALTER TABLE foo RENAME TO bar; - CREATE TABLE foo(); - COMMIT; - ~~~ - Previously, the user would receive a "relation ... already exists" error. [#70334][#70334] - -- To align with PostgreSQL, casting an OID type with a value of `0` to a `regtype`, `regproc`, `regclass`, or `regnamespace` now will convert the value to the string `-`. The reverse behavior is implemented too, so a `-` will become `0` if casted to a `reg` OID type. [#71873][#71873] -- Implemented the `date_part` [built-in function](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators) for better compatibility with PostgreSQL. [#72502][#72502] -- `PRIMARY KEY`s have been renamed to conform to PostgreSQL (e.g., `@tbl_col1_col2_pkey`) in this release. To protect certain use cases of backward compatibility, we also allow `@primary` index hints to alias to the `PRIMARY KEY`, but only if no other index is named `primary`. [#72534][#72534] -- Some filesystem-level properties are now exposed in `crdb_internal.kv_store_status`. Note that the particular fields and layout are not stabilized yet. [#72435][#72435] -- Introduced a [built-in function](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators) `crdb_internal.init_stream` and a [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) `stream_replication.job_liveness_timeout`. [#72330][#72330] -- A notice is now issued when creating a [foreign key](https://www.cockroachlabs.com/docs/v22.1/foreign-key) referencing a column of a different width. [#72545][#72545] -- Newly created databases will now have the `CONNECT` privilege granted by default to the `PUBLIC` role. [#72595][#72595] -- [SQL Stats metrics](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page#statement-statistics) with `*_internal` suffix in their labels are now removed. 
[#72667][#72667] -- `system.table_statistics` has an additional field, `avgSize`, that is the average size in bytes of the column(s) with `columnIDs`. The new field is visible with the command `SHOW STATISTICS FOR TABLE`, as with other table statistics. This field is not yet used by the [optimizer](https://www.cockroachlabs.com/docs/v22.1/cost-based-optimizer#table-statistics) as part of cost modeling. [#72365][#72365] -- Added the modifier `IF NOT EXISTS` to `ALTER TABLE ... ADD CONSTRAINT IF NOT EXISTS`. [#71257][#71257] -- Fixed [`gateway_region` built-in](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators) for `--multitenant` demo clusters. [#72734][#72734] -- Prior to this change it was possible to alter a column's type in a way that was not compatible with the [`DEFAULT`](https://www.cockroachlabs.com/docs/v22.1/default-value) or [`ON UPDATE`](https://www.cockroachlabs.com/docs/v22.1/update) clause. This would cause parsing errors within tables. Now the `DEFAULT` or `ON UPDATE` clause is checked. [#71423][#71423] -- Added [`CREATE SEQUENCE AS `](https://www.cockroachlabs.com/docs/v22.1/create-sequence) option. [#57339][#57339] -- Introduced new SQL syntax [`ALTER RANGE RELOCATE`](https://www.cockroachlabs.com/docs/v22.1/configure-zone) to move a lease or replica between stores. This is helpful in an emergency situation to relocate data in the cluster. [#72305][#72305] -- [`EXPORT PARQUET`](https://www.cockroachlabs.com/docs/v22.1/export) can now export relations with `NULL` values to Parquet files. [#72530][#72530] -- Previously, [`ALTER TABLE ... RENAME TO ...`](https://www.cockroachlabs.com/docs/v22.1/alter-table#subcommands) would allow the user to move the table from a database to another if the table is being moved within one database's public schema to another. This is now disallowed. [#72000][#72000] -- [`ALTER DATABASE CONVERT TO SCHEMA`](https://www.cockroachlabs.com/docs/v22.1/alter-database#subcommands) is now disabled in v22.1 and later. [#72000][#72000] -- It is now possible to specify a different path for [incremental backups](https://www.cockroachlabs.com/docs/v22.1/take-full-and-incremental-backups). [#72713][#72713] -- If the `WITH GRANT OPTION` flag is present when granting privileges to a user, then that user is able to grant those same privileges to subsequent users; otherwise, they cannot. If the `GRANT OPTION FOR` flag is present when revoking privileges from a user, then only the ability to grant those privileges is revoked from that user, not the privileges themselves (otherwise both the privileges and the ability to grant those privileges are revoked). This behavior is consistent with PostgreSQL. [#72123][#72123] -- Disallowed `ST_MakePolygon` making empty polygons from empty [`LINESTRING`](https://www.cockroachlabs.com/docs/v22.1/linestring). This is not allowed in PostGIS. [#73489][#73489] -- [`EXPORT PARQUET`](https://www.cockroachlabs.com/docs/v22.1/export) now preserves column names and nullability. [#73382][#73382] -- Previously, the output from [`SHOW CREATE VIEW`](https://www.cockroachlabs.com/docs/v22.1/show-create#show-the-create-view-statement-for-a-view) returned on a single line. The format has now been improved to be more readable. [#73642][#73642] -- The output of the [`EXPLAIN`](https://www.cockroachlabs.com/docs/v22.1/explain) SQL statement has changed. Below the plan, index recommendations are now outputted for the SQL statement in question, if there are any. 
These index recommendations are indexes the user could add or indexes they could replace to make the given query faster. [#73302][#73302] -- The [`VOID` type](https://www.cockroachlabs.com/docs/v22.1/data-types) is now recognized. [#73488][#73488] -- In the experimental [`RELOCATE`](https://www.cockroachlabs.com/docs/v22.1/cockroachdb-feature-availability) syntax forms, the positional keyword that indicates that the statement should move non-voter replicas is now spelled `NONVOTERS`, instead of `NON_VOTERS`. [#73803][#73803] -- The inline help for the `ALTER` statements now mentions the `RELOCATE` syntax. [#73803][#73803] -- The experimental `ALTER RANGE...RELOCATE` syntax now accepts arbitrary [scalar expressions](https://www.cockroachlabs.com/docs/v22.1/scalar-expressions) as the source and target store IDs. [#73807][#73807] -- The output of `EXPLAIN ALTER RANGE ... RELOCATE` now includes the source and target store IDs. [#73807][#73807] -- The experimental `ALTER RANGE...RELOCATE` syntax now accepts arbitrary [scalar expressions](https://www.cockroachlabs.com/docs/v22.1/scalar-expressions) as the range ID when the `FOR` clause is not used. [#73807][#73807] -- The output of `EXPLAIN ALTER RANGE ... RELOCATE` now includes which replicas are subject to the relocation. [#73807][#73807] -- [`ALTER DEFAULT PRIVILEGES IN SCHEMA `](https://www.cockroachlabs.com/docs/v22.1/alter-default-privileges) is now supported. As well as specifying default privileges globally (within a database), users can now specify default privileges in a specific schema. When creating an object that has default privileges specified at the database (global) and at the schema level, the union of the default privileges is taken. [#73576][#73576] -- Index recommendations can be omitted from the [`EXPLAIN`](https://www.cockroachlabs.com/docs/v22.1/explain) plan if the `index_recommendations_enabled` session variable is set to false. [#73346][#73346] -- The output of `EXPLAIN ALTER INDEX/TABLE ... RELOCATE/SPLIT` now includes the target table/index name and, for the [`SPLIT AT`](https://www.cockroachlabs.com/docs/v22.1/split-at) variants, the expiry timestamp. [#73832][#73832] -- Added the `digest` and `hmac` [built-in functions](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators). They match the PostgreSQL (pgcrypto) implementation. Supported hash algorithms are `md5`, `sha1`, `sha224`, `sha256`, `sha384`, and `sha512`. [#73935][#73935] -- Users can now [`RESTORE`](https://www.cockroachlabs.com/docs/v22.1/restore) (locality-aware) [incremental backups](https://www.cockroachlabs.com/docs/v22.1/take-full-and-incremental-backups) created with the `incremental_storage` parameter. [#73744][#73744] -- Improved [cost model](https://www.cockroachlabs.com/docs/v22.1/cost-based-optimizer) for TopK expressions if the input to TopK can be partially ordered by its sort columns. [#73459][#73459] -- Added the `incremental_storage` option to [`SHOW BACKUP`](https://www.cockroachlabs.com/docs/v22.1/show-backup) so users can now observe [incremental backups](https://www.cockroachlabs.com/docs/v22.1/take-full-and-incremental-backups). [#73357][#73357] -- Previously, escape character processing (`\`) was missing from constraint span generation, which resulted in incorrect results when doing escaped lookups. This is now fixed. 
[#73978][#73978] -- The shard column of a [hash-sharded index](https://www.cockroachlabs.com/docs/v22.1/hash-sharded-indexes) is now a [virtual column](https://www.cockroachlabs.com/docs/v22.1/computed-columns) and not a stored computed column. [#74138][#74138] -- Clients waiting for a [schema change](https://www.cockroachlabs.com/docs/v22.1/online-schema-changes) job will now receive an error if the job they are waiting for is paused. [#74157][#74157] -- The [`GRANT`](https://www.cockroachlabs.com/docs/v22.1/grant#supported-privileges) privilege is deprecated in v22.1 and will be removed in v22.2 in favor of grant options. To promote backward compatibility for users with code still using `GRANT`, we will give grant options on every privilege a user has when they are granted `GRANT` and remove all their grant options when `GRANT` is revoked, in addition to the existing grant option behavior. [#74210][#74210] -- `system.protected_timestamp_records` table now has an additional `target` column that will store an encoded protocol buffer that represents the target a record protects. This target can either be the entire cluster, tenants, or schema objects (databases/tables). [#74281][#74281] -- The KV tracing of SQL queries (that could be obtained with `\set auto_trace=on,kv`) has been adjusted slightly. Previously, CockroachDB would fully decode the key in each key-value pair, even if some part of the key would not be decoded while tracing was enabled. Now, CockroachDB does not perform any extra decoding, and parts of the key that are not decoded are replaced with `?`. [#74236][#74236] -- CockroachDB now supports `default_with_oids`, which only accepts a `false` value. [#74499][#74499] -- [`EXPORT PARQUET`](https://www.cockroachlabs.com/docs/v22.1/export) can export columns of type [array](https://www.cockroachlabs.com/docs/v22.1/data-types) [#73735][#73735] -- Statements are now formatted prior to being sent to [the DB Console](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page). This is done using a new [built-in function](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators) that formats statements. [#73853][#73853] - -
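A few of the SQL-surface changes above, sketched with hypothetical table, constraint, and column names:

~~~ sql
-- Control every job of a given type at once.
PAUSE ALL CHANGEFEED JOBS;
RESUME ALL CHANGEFEED JOBS;

-- Explicit NULL ordering in ORDER BY.
SELECT * FROM rides ORDER BY end_time DESC NULLS LAST;

-- Comment on a constraint, then display comments alongside constraints.
COMMENT ON CONSTRAINT rides_rider_id_fkey ON rides IS 'rider must exist in users';
SHOW CONSTRAINTS FROM rides WITH COMMENT;

-- Recreate every schema in the current database as a flat script.
SHOW CREATE ALL SCHEMAS;
~~~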

### Operational changes
- -- [`cockroach debug zip`](https://www.cockroachlabs.com/docs/v22.1/cockroach-debug-zip) now includes the raw `system.settings` table. This table makes it possible to determine whether a [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) has been explicitly set. [#70498][#70498] -- The meaning of the `sql.distsql.max_running_flows` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) has been extended so that when the value is negative, it is multiplied by the number of CPUs on the node to get the maximum number of concurrent remote flows on the node. The default value is `-128`, meaning that a 4-CPU machine will have up to `512` concurrent remote DistSQL flows, while an 8-CPU machine will have up to `1024`. The previous default was `500`. See the sketch after this list. [#71787][#71787] -- Some existing settings related to [`BACKUP`](https://www.cockroachlabs.com/docs/v22.1/backup) execution are now listed by [`SHOW CLUSTER SETTING`](https://www.cockroachlabs.com/docs/v22.1/show-cluster-setting). [#71962][#71962] -- The [cluster settings](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) that gate the admission control system are now set to defaults that enable [admission control](https://www.cockroachlabs.com/docs/v22.1/admission-control). [#68535][#68535] -- The default value of the `kv.rangefeed.catchup_scan_iterator_optimization.enabled` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) is now `true`. [#73473][#73473] -- Added a metric `addsstable.aswrites` that tracks the number of `AddSSTable` requests ingested as regular write batches. [#73910][#73910] -- Added a metric `replica.uninitialized` that tracks the number of `Uninitialized` replicas in a store. [#73975][#73975] - -
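To make the negative-value semantics of `sql.distsql.max_running_flows` concrete: with the default of `-128`, a 4-vCPU node allows 4 × 128 = 512 concurrent remote DistSQL flows and an 8-vCPU node allows 1024.

~~~ sql
-- The default: a negative value is multiplied by the node's CPU count.
SET CLUSTER SETTING sql.distsql.max_running_flows = -128;

-- A positive value is still treated as an absolute per-node limit,
-- as in previous releases (the old default was 500).
SET CLUSTER SETTING sql.distsql.max_running_flows = 500;
~~~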

### Command-line changes
- -- [`cockroach demo`](https://www.cockroachlabs.com/docs/v22.1/cockroach-demo) will now begin processing scheduled jobs after 15 seconds, instead of the 2–5 minutes in a production environment. [#70242][#70242] -- The 25 max QPS rate limit for workloads on [`cockroach demo`](https://www.cockroachlabs.com/docs/v22.1/cockroach-demo) can now be configured with a `--workload-max-qps` flag. [#70642][#70642] -- The SQL shell now supports the `\du USER` command to show information for the current user. [#70609][#70609] -- Added support for a CLI shortcut that displays [constraint](https://www.cockroachlabs.com/docs/v22.1/constraints) information similar to PostgreSQL. The shortcut is `\dd TABLE`. [#69783][#69783] -- Added a `--read-only` flag to [`cockroach sql`](https://www.cockroachlabs.com/docs/v22.1/cockroach-sql) which will set the `default_session_read_only` variable upon connecting. This is effectively equivalent to the `PGTARGETSESSIONATTRS=read-only` option added to libpq and `psql` in PostgreSQL 13. [#71003][#71003] -- Previously, [`cockroach debug merge-logs`](https://www.cockroachlabs.com/docs/v22.1/cockroach-debug-merge-logs) output was prefixed by a short machine name by default, which made it difficult to identify the originating node when looking at the merged results. CockroachDB now supports `"${fpath}"` in the `--prefix` argument. [#71254][#71254] -- Added an option in the [`cockroach demo movr`](https://www.cockroachlabs.com/docs/v22.1/cockroach-demo) command to populate the `user_promo_code` table. [#61531][#61531] -- Allowed demoing of CockroachDB's multi-tenant features via the `--multitenant` flag to [`cockroach demo`](https://www.cockroachlabs.com/docs/v22.1/cockroach-demo). [#71026][#71026] -- [`cockroach demo`](https://www.cockroachlabs.com/docs/v22.1/cockroach-demo) now runs by default in multi-tenant mode. [#71988][#71988] -- Added buffering to log sinks. This can be configured with the new `"buffering"` field on any log sink provided via the `--log` or `--log-config-file` flags. [#70330][#70330] -- The server identifiers (cluster ID, node ID, tenant ID, instance ID) are no longer duplicated at the start of every new [log file](https://www.cockroachlabs.com/docs/v22.1/configure-logs#output-to-files) (during log file rotations). They are now only logged when known during server start-up. (The copy of the identifiers is still included in per-event envelopes for the various [`json` output logging formats](https://www.cockroachlabs.com/docs/v22.1/log-formats#format-json).) [#73306][#73306] -- The [`cockroach node drain`](https://www.cockroachlabs.com/docs/v22.1/cockroach-node) command is now able to drain a node by ID, specified on the command line, from another node in the cluster. It now also supports the flag `--self` for symmetry with [`node decommission`](https://www.cockroachlabs.com/docs/v22.1/cockroach-node#node-decommission). Using `node drain` without either `--self` or a node ID is now deprecated. [#73991][#73991] -- The deprecated command `cockroach quit` now accepts the flags `--self` and the ability to specify a node ID like `cockroach node drain`. Even though the command is deprecated, this change was performed to ensure symmetry in the documentation until the command is effectively removed. [#73991][#73991] -- Not finding the right certificates in the `certs` directory, or not specifying a `certs` directory or certificate path, will now fall back on checking server CA using Go's TLS code to find the certificates in the OS trust store. 
If no matching certificate is found, then an `x509` error will occur announcing that the certificate is signed by an unknown authority. [#73776][#73776] - -

### API endpoint changes
- -- [`CREATE CHANGEFEED`](https://www.cockroachlabs.com/docs/v22.1/create-changefeed) on a cloud storage sink now allows a new query parameter to specify how the file paths are partitioned. For example, `partition_format="daily"` represents the default behavior of splitting into dates (`2021-05-01/`), while `partition_format="hourly"` further partitions them by hour (`2021-05-01/05/`), and `partition_format="flat"` does not partition at all. See the sketch after this list. [#70207][#70207] -- [OpenID Connect (OIDC)](https://www.cockroachlabs.com/docs/v22.1/sso#cluster-settings) support for DB Console is no longer marked as `experimental`. [#71183][#71183] -- Added a new API endpoint for getting a table's index statistics. [#72660][#72660] -- Added a new batch RPC, and batch method counters are now visible in DB Console and [`_status/vars`](https://www.cockroachlabs.com/docs/v22.1/monitoring-and-alerting). [#72767][#72767] - -
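A hedged sketch of the `partition_format` parameter described above. The parameter is appended to the cloud storage sink URI; the bucket path, sink scheme, and auth parameter below are placeholders, so check the changefeed and cloud storage documentation for the exact URI form in your deployment.

~~~ sql
-- Placeholder sink URI; partition_format is the only parameter of interest here.
CREATE CHANGEFEED FOR TABLE rides
  INTO 'gs://my-bucket/changefeeds?partition_format=hourly&AUTH=implicit';
~~~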

### DB Console changes
- -- Fixed drag to zoom on [custom charts](https://www.cockroachlabs.com/docs/v22.1/ui-custom-chart-debug-page#use-the-custom-chart-page). [#70229][#70229] -- Fixed drag to time range for a specific window issue. [#70326][#70326] -- Added pre-sizing calculation for [**Metrics**](https://www.cockroachlabs.com/docs/v22.1/ui-overview-dashboard) page graphs. [#70838][#70838] -- The `/debug/pprof/goroutineui/` page has a new and improved look. [#71690][#71690] -- The all nodes report now notifies a user if they need more privileges to view the page's information. [#71960][#71960] -- The [**Advanced Debug**](https://www.cockroachlabs.com/docs/v22.1/ui-debug-pages) page now contains an additional link under the **Metrics** header called Rules. This endpoint exposes [Prometheus-compatible alerting](https://www.cockroachlabs.com/docs/v22.1/monitoring-and-alerting#events-to-alert-on) and aggregation rules for CockroachDB metrics. [#72677][#72677] -- Added an **Index Stats** table and a button to clear index usage stats on the [Table Details](https://www.cockroachlabs.com/docs/v22.1/ui-databases-page#table-details) page for each table. [#72948][#72948] -- Added the ability to remove the dashed underline from sorted table headers for headers with no tooltips. Removed the dashed underline from the **Index Stats** table headers. [#73455][#73455] -- Added a new [**Index Details**](https://www.cockroachlabs.com/docs/v22.1/ui-databases-page#index-details) page, which exists for each index on a table. [#73178][#73178] -- Updated the **Reset Index Stats** button text to be more clear. [#73700][#73700] -- The time pickers on the [**Statements**](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page) and [**Transactions**](https://www.cockroachlabs.com/docs/v22.1/ui-transactions-page) pages now have the same style and functionality as the time picker on the [**Metrics**](https://www.cockroachlabs.com/docs/v22.1/ui-overview-dashboard) page. [#73608][#73608] -- The **clear SQL stats** links on the [**Statements**](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page) and [**Transactions**](https://www.cockroachlabs.com/docs/v22.1/ui-transactions-page) pages were relabeled **reset SQL stats**, for consistency with the language in the SQL shell. [#73922][#73922] -- Added the ability to create conditional statement diagnostics by adding two new fields: 1) minimum execution latency, which specifies the limit for when a statement should be tracked, and 2) expiry time, which specifies when a diagnostics request should expire. [#74112][#74112] -- The **Terminate Session** and **Terminate Query** buttons are again available to be enabled on the [**Sessions Page**](https://www.cockroachlabs.com/docs/v22.1/ui-sessions-page). [#74408][#74408] -- Added formatting to statements on the [**Statements**](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page), [**Transactions**](https://www.cockroachlabs.com/docs/v22.1/ui-transactions-page), and **Index Details** pages. [#73853][#73853] -- Updated colors for **Succeeded** badges and the progress bar on the [**Jobs**](https://www.cockroachlabs.com/docs/v22.1/ui-jobs-page) page. [#73924][#73924] - -

### Bug fixes
- -- Fixed a bug where `CURRENT_USER` and [`SESSION_USER`](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#special-syntax-forms) were parsed incorrectly. [#70439][#70439] -- Fixed a bug where [index](https://www.cockroachlabs.com/docs/v22.1/indexes)/[partition](https://www.cockroachlabs.com/docs/v22.1/partitioning) subzones may not have inherited the `global_reads` field correctly in some cases from their parent. [#69983][#69983] -- Previously, [`DROP DATABASE CASCADE`](https://www.cockroachlabs.com/docs/v22.1/drop-database) could fail while resolving a schema in certain scenarios with the following error: `ERROR: error resolving referenced table ID : descriptor is being dropped`. This is now fixed. [#69789][#69789] -- [Backfills](https://www.cockroachlabs.com/docs/v22.1/changefeed-messages#schema-changes-with-column-backfill) will now always respect the most up-to-date value of `changefeed.backfill.concurrent_scan_requests` even during an ongoing backfill. [#69933][#69933] -- The [`cockroach debug merge-logs`](https://www.cockroachlabs.com/docs/v22.1/cockroach-debug-merge-logs) command no longer returns an error when the log decoder attempts to parse older logs. [#68282][#68282] -- The PostgreSQL-compatible "Access Privilege Inquiry Functions" (e.g., `has_foo_privilege`) were incorrectly returning whether all comma-separated privileges were held, instead of whether any of the provided privileges were held. This incompatibility has been resolved. [#69939][#69939] -- Queries involving arrays of tuples will no longer spuriously fail due to an encoding error. [#63996][#63996] -- [`cockroach sql -e`](https://www.cockroachlabs.com/docs/v22.1/cockroach-sql) (and `demo -e`) can now process all client-side commands, not just `\echo`, `\set`, and a few others. [#70671][#70671] -- [`cockroach sql --set=auto_trace=on -e 'select ...'`](https://www.cockroachlabs.com/docs/v22.1/cockroach-sql) (and the similar `demo` command) now produces an execution trace properly. [#70671][#70671] -- Previously, bulk `INSERT`/`UPDATE` in implicit transactions retried indefinitely if the statement exceeded the default leasing deadline of 5 minutes. Now, if the leasing deadline is exceeded this will be raised back up to the SQL layer to refresh the deadline before trying to commit. [#69936][#69936] -- [`IMPORT`](https://www.cockroachlabs.com/docs/v22.1/import) now respects the spatial index storage options specified in `PGDUMP` files on indexes it creates. [#66903][#66903] -- Fixed `IMPORT` in [`tpcc`](https://www.cockroachlabs.com/docs/v22.1/cockroach-workload#tpcc-workload) workload. [#71013][#71013] -- Some query patterns that previously could cause a single node to become a [hotspot](https://www.cockroachlabs.com/docs/v22.1/query-behavior-troubleshooting#single-hot-node) have been fixed so that the load is evenly distributed across the whole cluster. [#70648][#70648] -- Fixed a bug where the 2-parameter `setval` built-in function previously caused the [sequence](https://www.cockroachlabs.com/docs/v22.1/create-sequence) to increment incorrectly one extra time. For a sequence to increment, use `setval(seq, val, true)`. [#71643][#71643] -- Previously, the effects of the `setval` and `nextval` built-in functions would be rolled back if the surrounding transaction was rolled back. This was not correct, as `setval` is not supposed to respect transaction boundaries. This is now fixed. [#71643][#71643] -- In v21.2, jobs that fail to revert are retried unconditionally, but with exponential backoff. 
In the mixed-version state there is no exponential backoff, so it would not be good to retry unconditionally. The behavior has been changed such that before v21.2 is finalized, these jobs will enter the revert-failed state as in v21.1. [#71780][#71780] -- Fixed a bug that prevented rollback of [`ALTER PRIMARY KEY`](https://www.cockroachlabs.com/docs/v22.1/alter-primary-key) when the old primary key was interleaved. [#71780][#71780] -- Previously, adding new values to a user-defined [`ENUM`](https://www.cockroachlabs.com/docs/v22.1/enum) type would cause a prepared statement using that type to not work. This now works as expected. [#71632][#71632] -- Previously, when records and [`ENUM`](https://www.cockroachlabs.com/docs/v22.1/enum) types containing escape sequences were shown in the CLI, they would be incorrectly double-escaped. This is now fixed. [#71916][#71916] -- `SCHEMA CHANGE` and `SCHEMA CHANGE GC` jobs following a `DROP ... CASCADE` now have sensible names, instead of `''` and `'GC for '`, respectively. [#70630][#70630] -- Fixed a race condition that could have caused [core changefeeds](https://www.cockroachlabs.com/docs/v22.1/changefeed-for) whose targeted table became invalid to not explain why when shutting down. [#72490][#72490] -- [`cockroach demo`](https://www.cockroachlabs.com/docs/v22.1/cockroach-demo) can now be launched with `--global` and `--multitenant=true` options. [#72750][#72750] -- Y-axis labels on [custom charts](https://www.cockroachlabs.com/docs/v22.1/ui-custom-chart-debug-page#use-the-custom-chart-page) no longer display `undefined`. [#73055][#73055] -- Raft snapshots now detect timeouts earlier and avoid spamming the logs with [`context deadline exceeded`](https://www.cockroachlabs.com/docs/v22.1/common-errors#context-deadline-exceeded) errors. [#73279][#73279] -- Error messages produced during import are now truncated. Previously, [`IMPORT`](https://www.cockroachlabs.com/docs/v22.1/import) could potentially generate large error messages that could not be persisted to the jobs table, resulting in a failed import never entering the failed state and instead retrying repeatedly. [#73303][#73303] -- Servers no longer crash due to panics in HTTP handlers. [#72395][#72395] -- `crdb_internal.table_indexes` now shows if an index is sharded or not. [#73380][#73380] -- Previously, creating indexes with special characters would fail to identify indexes with the same matching name, which caused an internal error. This is now fixed. [#73367][#73367] -- CockroachDB now prohibits mixed dimension [`LINESTRING`](https://www.cockroachlabs.com/docs/v22.1/linestring) in `ST_MakePolygon`. [#73489][#73489] -- Index `CREATE` statements in the `pg_indexes` table now shows a hash-sharding bucket count if an index is hash sharded. Column direction is removed from `gin` index in `pg_indexes`. [#73491][#73491] -- Uninitialized replicas that are abandoned after an unsuccessful snapshot no longer perform periodic background work, so they no longer have a non-negligible cost. [#73362][#73362] -- Fixed a bug that caused incorrect evaluation of placeholder values in `EXECUTE` statements. The bug presented when the `PREPARE` statement cast a placeholder value, e.g., `PREPARE s AS SELECT $1::INT2`. If the assigned value for `$1` exceeded the maximum width value of the cast target type, the result value of the cast could be incorrect. This bug had been present since v19.1 or earlier. 
[#73762][#73762] -- Previously, during [`RESTORE`](https://www.cockroachlabs.com/docs/v22.1/restore) `system.namespace` entry wouldn't be inserted for synthetic public schemas. This is now fixed. [#73875][#73875] -- Fixed a bug that caused internal errors when altering the primary key of a table. The bug was only present if the table had a partial index with a predicate that referenced a [virtual computed column](https://www.cockroachlabs.com/docs/v22.1/computed-columns). This bug was present since virtual computed columns were added in v21.1.0. [#74102][#74102] -- [Foreign keys](https://www.cockroachlabs.com/docs/v22.1/foreign-key) referencing a hash-sharded key will not fail anymore. [#74140][#74140] -- Raft snapshots no longer risk starvation under very high concurrency. Before this fix, it was possible that many of Raft snapshots could be starved and prevented from succeeding due to timeouts, which were accompanied by errors like [`error rate limiting bulk io write: context deadline exceeded`](https://www.cockroachlabs.com/docs/v22.1/common-errors#context-deadline-exceeded). [#73288][#73288] -- Portals in the extended protocol of the PostgreSQL wire protocol can now be used from implicit transactions and can be executed multiple times if there is a row-count limit applied to the portal. Previously, trying to execute the same portal twice would result in an `unknown portal` error. [#74242][#74242] -- Fixed a bug that incorrectly allowed creating [computed column](https://www.cockroachlabs.com/docs/v22.1/computed-columns) expressions, expression indexes, and partial index predicate expressions with mutable casts between [`STRING` types](https://www.cockroachlabs.com/docs/v22.1/data-types) and the types `REGCLASS`, `REGNAMESPACE`, `REGPROC`, `REGPROCEDURE`, `REGROLE`, and `REGTYPE`. Creating such computed columns, expression indexes, and partial indexes is now prohibited. Any tables with these types of expressions may be corrupt and should be dropped and recreated. [#74286][#74286] -- Fixed a bug that, in very rare cases, could result in a node terminating with a fatal error: `unable to remove placeholder: corrupted replicasByKey map`. To avoid potential data corruption, users affected by this crash should not restart the node, but instead [decommission it](https://www.cockroachlabs.com/docs/v22.1/node-shutdown?filters=decommission) in absentia and have it rejoin the cluster under a new `nodeID`. [#73734][#73734] -- Previously, when [foreign keys](https://www.cockroachlabs.com/docs/v22.1/foreign-key) were included inside an `ADD COLUMN` statement and multiple columns were added in a single statement then the first added column would have the foreign key applied (or an error generated based on the wrong column). This is now fixed. [#74411][#74411] -- Previously, a double-nested [`ENUM`](https://www.cockroachlabs.com/docs/v22.1/enum) in a DistSQL query would not get hydrated on remote nodes resulting in panic. This is now fixed. [#74189][#74189] -- Fixed a panic when attempting to access the hottest ranges (e.g., via the `/_status/hotranges` endpoint) before initial statistics had been gathered. [#74507][#74507] -- Previously, setting [`sslmode=require`](https://www.cockroachlabs.com/docs/v22.1/connection-parameters#additional-connection-parameters) would check for local certificates, so omitting a certs path would cause an error even though `require` does not verify server certificates. This has been fixed by bypassing certificate path checking for `sslmode=require`. 
This bug had been present since v21.2.0. [#73776][#73776] -- Previously, incorrect results would be returned, or internal errors, on queries with window functions returning [`INT`](https://www.cockroachlabs.com/docs/v22.1/data-types), `FLOAT`, `BYTES`, `STRING`, `UUID`, or `JSON` type when the [disk spilling](https://www.cockroachlabs.com/docs/v22.1/vectorized-execution#disk-spilling-operations) occurred. The bug was introduced in v21.2.0 and is now fixed. [#74491][#74491] -- Previously, `MIN`/`MAX` could be incorrectly calculated when used as window functions in some cases after [spilling to disk](https://www.cockroachlabs.com/docs/v22.1/vectorized-execution#disk-spilling-operations). The bug was introduced in v21.2.0 and is now fixed. [#74491][#74491] -- Previously, [`IMPORT TABLE ... PGDUMP`](https://www.cockroachlabs.com/docs/v22.1/import#import-a-table-from-a-postgresql-database-dump) with a `COPY FROM` statement in the dump file that has less target columns than the `CREATE TABLE` schema definition would result in a nil pointer exception. This is now fixed. [#74601][#74601] - -
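To illustrate the `setval` fix noted above: the three-argument form controls whether the stored value counts as already returned, and the two-argument form no longer advances the sequence an extra time. The sequence name is a placeholder.

~~~ sql
CREATE SEQUENCE order_seq;

-- Mark 10 as already returned; the next nextval() yields 11.
SELECT setval('order_seq', 10, true);

-- Store 10 without marking it as returned; the next nextval() yields 10.
SELECT setval('order_seq', 10, false);

-- The two-argument form behaves like is_called = true: nextval() yields 21.
SELECT setval('order_seq', 20);
SELECT nextval('order_seq');
~~~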

### Performance improvements
- -- Mutation statements with a `RETURNING` clause that are not inside an explicit transaction are faster in some cases. [#70200][#70200] -- Added collection of basic table statistics during an [import](https://www.cockroachlabs.com/docs/v22.1/import), to help the optimizer until full statistics collection completes. [#67106][#67106] -- The accuracy of histogram calculations for `BYTES` types has been improved. As a result, the [optimizer](https://www.cockroachlabs.com/docs/v22.1/cost-based-optimizer) should generate more efficient query plans in some cases. [#68740][#68740] -- A [`SELECT`](https://www.cockroachlabs.com/docs/v22.1/selection-queries) query with both `MIN(LeadingIndexColumn)` and `MAX(LeadingIndexColumn)` can now be performed with two `LIMITED SCAN`s instead of a single `FULL SCAN`. [#70496][#70496] -- A [`SELECT`](https://www.cockroachlabs.com/docs/v22.1/selection-queries) query from a single table with more than one `MIN` or `MAX` scalar aggregate expression and a `WHERE` clause can now be performed with `LIMITED SCAN`s, one per aggregate expression, instead of a single `FULL SCAN`. Note: No other aggregate function, such as `SUM`, may be present in the query block in order for it to be eligible for this transformation. This optimization should occur when each `MIN` or `MAX` expression involves a leading index column, so that a sort is not required for the limit operation, and the resulting query plan will appear cheapest to the optimizer. [#70854][#70854] -- Queries with many ORed `WHERE` clause predicates previously took an excessive amount of time for the [optimizer](https://www.cockroachlabs.com/docs/v22.1/cost-based-optimizer) to process, especially if the predicates involved index columns, and if there were more than 1000 predicates (which could happen with application-generated SQL). To fix this, the processing of SQL with many ORed predicates have been optimized to make sure a query plan can be generated in seconds instead of minutes or hours. [#71247][#71247] -- Creating many [schema changes](https://www.cockroachlabs.com/docs/v22.1/online-schema-changes) in parallel now runs faster due to improved concurrency notifying the jobs subsystem. [#71909][#71909] -- The `sqlinstance` subsystem no longer reads from the backing SQL table for every request for SQL instance details. This will result in improved performance for supporting multi-region setup for the multi-tenant architecture. [#69976][#69976] -- Improved efficiency of looking up old historical descriptors. [#71239][#71239] -- Improved performance of some `GROUP BY` queries with a `LIMIT` if there is an index ordering that matches a subset of the grouping columns. In this case the total number of aggregations needed to satisfy the `LIMIT` can be emitted without scanning the entire input, enabling the execution to be more effective. [#71546][#71546] -- [`var_pop` and `stddev_pop`](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators) aggregate functions are now evaluated more efficiently in a distributed setting. [#73712][#73712] -- Improved job performance in the face of concurrent schema changes by reducing contention. [#72297][#72297] -- [Incremental backups](https://www.cockroachlabs.com/docs/v22.1/take-full-and-incremental-backups) now use less memory to verify coverage of prior backups. [#74393][#74393] -- CockroachDB now retrieves the password credentials of the SQL client concurrently without waiting for the password response during the authentication exchange. 
This can yield a small latency reduction in new SQL connections. [#74365][#74365] -- CockroachDB now allows rangefeed streams to use a separate HTTP connection when the `kv.rangefeed.use_dedicated_connection_class.enabled` setting is turned on. Using a separate connection class reduces the possibility of OOMs when running rangefeeds against very large tables. The connection window size for rangefeeds can be adjusted via the `COCKROACH_RANGEFEED_INITIAL_WINDOW_SIZE` environment variable, whose default is 128KB. [#74222][#74222] -- The merging of [incremental backup](https://www.cockroachlabs.com/docs/v22.1/take-full-and-incremental-backups) layers during [`RESTORE`](https://www.cockroachlabs.com/docs/v22.1/restore) now uses a simpler and less memory-intensive algorithm. [#74394][#74394] -- The default snapshot recovery/rebalance rates `kv.snapshot_rebalance.max_rate` and `kv.snapshot_recovery.max_rate` were bumped from 8MB/s to 32MB/s. Production experience has taught us that the earlier values were too conservative. Users might observe higher network utilization during rebalancing/recovery in service of rebalancing/recovering faster (for the latter, possibly reducing the MTTF). If the extra utilization is undesirable, users can manually revert these rates back to their original settings of 8 MB/s. [#71814][#71814] - -
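The MIN/MAX improvement above applies when the aggregated column is a leading index column; a sketch against a hypothetical table:

~~~ sql
CREATE TABLE events (
  id UUID DEFAULT gen_random_uuid() PRIMARY KEY,
  ts TIMESTAMPTZ,
  INDEX (ts)
);

-- Both aggregates can now be planned as two limited scans of the ts index
-- instead of one full scan; EXPLAIN shows the plan that was chosen.
EXPLAIN SELECT min(ts), max(ts) FROM events;
~~~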

Build changes

-
-- Upgraded to Go v1.17. [#69603][#69603]
-
-

Miscellaneous

- -

Docker

-
-- Environment variables and init scripts in `docker-entrypoint-initdb.d` for the [`start-single-node`](https://www.cockroachlabs.com/docs/v22.1/cockroach-start-single-node) command are now supported. [#70238][#70238]
-
-
- -

Contributors

-
-This release includes 1720 merged PRs by 132 authors.
-
-We would like to thank the following contributors from the CockroachDB community:
-
-- Catherine J (first-time contributor)
-- Eudald (first-time contributor)
-- Ganeshprasad Biradar
-- Josh Soref (first-time contributor)
-- Max Neverov
-- Miguel Novelo (first-time contributor)
-- Paul Lin (first-time contributor)
-- Remy Wang (first-time contributor)
-- Rupesh Harode
-- TennyZhuang (first-time contributor)
-- Tharun
-- Ulf Adams
-- Zhou Fang (first-time contributor)
-- lpessoa (first-time contributor)
-- mnovelodou (first-time contributor)
-- neeral
-- shralex (first-time contributor)
-- tukeJonny (first-time contributor)
-
-
- -[#57339]: https://github.com/cockroachdb/cockroach/pull/57339 -[#61531]: https://github.com/cockroachdb/cockroach/pull/61531 -[#63996]: https://github.com/cockroachdb/cockroach/pull/63996 -[#64503]: https://github.com/cockroachdb/cockroach/pull/64503 -[#65599]: https://github.com/cockroachdb/cockroach/pull/65599 -[#66903]: https://github.com/cockroachdb/cockroach/pull/66903 -[#67106]: https://github.com/cockroachdb/cockroach/pull/67106 -[#67737]: https://github.com/cockroachdb/cockroach/pull/67737 -[#68140]: https://github.com/cockroachdb/cockroach/pull/68140 -[#68255]: https://github.com/cockroachdb/cockroach/pull/68255 -[#68282]: https://github.com/cockroachdb/cockroach/pull/68282 -[#68535]: https://github.com/cockroachdb/cockroach/pull/68535 -[#68705]: https://github.com/cockroachdb/cockroach/pull/68705 -[#68740]: https://github.com/cockroachdb/cockroach/pull/68740 -[#68964]: https://github.com/cockroachdb/cockroach/pull/68964 -[#69215]: https://github.com/cockroachdb/cockroach/pull/69215 -[#69292]: https://github.com/cockroachdb/cockroach/pull/69292 -[#69293]: https://github.com/cockroachdb/cockroach/pull/69293 -[#69300]: https://github.com/cockroachdb/cockroach/pull/69300 -[#69314]: https://github.com/cockroachdb/cockroach/pull/69314 -[#69603]: https://github.com/cockroachdb/cockroach/pull/69603 -[#69609]: https://github.com/cockroachdb/cockroach/pull/69609 -[#69783]: https://github.com/cockroachdb/cockroach/pull/69783 -[#69789]: https://github.com/cockroachdb/cockroach/pull/69789 -[#69909]: https://github.com/cockroachdb/cockroach/pull/69909 -[#69911]: https://github.com/cockroachdb/cockroach/pull/69911 -[#69913]: https://github.com/cockroachdb/cockroach/pull/69913 -[#69933]: https://github.com/cockroachdb/cockroach/pull/69933 -[#69936]: https://github.com/cockroachdb/cockroach/pull/69936 -[#69939]: https://github.com/cockroachdb/cockroach/pull/69939 -[#69976]: https://github.com/cockroachdb/cockroach/pull/69976 -[#69981]: https://github.com/cockroachdb/cockroach/pull/69981 -[#69983]: https://github.com/cockroachdb/cockroach/pull/69983 -[#70041]: https://github.com/cockroachdb/cockroach/pull/70041 -[#70066]: https://github.com/cockroachdb/cockroach/pull/70066 -[#70100]: https://github.com/cockroachdb/cockroach/pull/70100 -[#70115]: https://github.com/cockroachdb/cockroach/pull/70115 -[#70200]: https://github.com/cockroachdb/cockroach/pull/70200 -[#70207]: https://github.com/cockroachdb/cockroach/pull/70207 -[#70222]: https://github.com/cockroachdb/cockroach/pull/70222 -[#70226]: https://github.com/cockroachdb/cockroach/pull/70226 -[#70229]: https://github.com/cockroachdb/cockroach/pull/70229 -[#70238]: https://github.com/cockroachdb/cockroach/pull/70238 -[#70242]: https://github.com/cockroachdb/cockroach/pull/70242 -[#70326]: https://github.com/cockroachdb/cockroach/pull/70326 -[#70330]: https://github.com/cockroachdb/cockroach/pull/70330 -[#70332]: https://github.com/cockroachdb/cockroach/pull/70332 -[#70334]: https://github.com/cockroachdb/cockroach/pull/70334 -[#70338]: https://github.com/cockroachdb/cockroach/pull/70338 -[#70355]: https://github.com/cockroachdb/cockroach/pull/70355 -[#70369]: https://github.com/cockroachdb/cockroach/pull/70369 -[#70423]: https://github.com/cockroachdb/cockroach/pull/70423 -[#70439]: https://github.com/cockroachdb/cockroach/pull/70439 -[#70444]: https://github.com/cockroachdb/cockroach/pull/70444 -[#70454]: https://github.com/cockroachdb/cockroach/pull/70454 -[#70496]: https://github.com/cockroachdb/cockroach/pull/70496 -[#70498]: 
https://github.com/cockroachdb/cockroach/pull/70498 -[#70513]: https://github.com/cockroachdb/cockroach/pull/70513 -[#70522]: https://github.com/cockroachdb/cockroach/pull/70522 -[#70581]: https://github.com/cockroachdb/cockroach/pull/70581 -[#70590]: https://github.com/cockroachdb/cockroach/pull/70590 -[#70591]: https://github.com/cockroachdb/cockroach/pull/70591 -[#70604]: https://github.com/cockroachdb/cockroach/pull/70604 -[#70609]: https://github.com/cockroachdb/cockroach/pull/70609 -[#70618]: https://github.com/cockroachdb/cockroach/pull/70618 -[#70630]: https://github.com/cockroachdb/cockroach/pull/70630 -[#70642]: https://github.com/cockroachdb/cockroach/pull/70642 -[#70648]: https://github.com/cockroachdb/cockroach/pull/70648 -[#70656]: https://github.com/cockroachdb/cockroach/pull/70656 -[#70658]: https://github.com/cockroachdb/cockroach/pull/70658 -[#70671]: https://github.com/cockroachdb/cockroach/pull/70671 -[#70693]: https://github.com/cockroachdb/cockroach/pull/70693 -[#70716]: https://github.com/cockroachdb/cockroach/pull/70716 -[#70722]: https://github.com/cockroachdb/cockroach/pull/70722 -[#70726]: https://github.com/cockroachdb/cockroach/pull/70726 -[#70792]: https://github.com/cockroachdb/cockroach/pull/70792 -[#70838]: https://github.com/cockroachdb/cockroach/pull/70838 -[#70854]: https://github.com/cockroachdb/cockroach/pull/70854 -[#70865]: https://github.com/cockroachdb/cockroach/pull/70865 -[#70881]: https://github.com/cockroachdb/cockroach/pull/70881 -[#70942]: https://github.com/cockroachdb/cockroach/pull/70942 -[#71003]: https://github.com/cockroachdb/cockroach/pull/71003 -[#71013]: https://github.com/cockroachdb/cockroach/pull/71013 -[#71026]: https://github.com/cockroachdb/cockroach/pull/71026 -[#71058]: https://github.com/cockroachdb/cockroach/pull/71058 -[#71083]: https://github.com/cockroachdb/cockroach/pull/71083 -[#71138]: https://github.com/cockroachdb/cockroach/pull/71138 -[#71183]: https://github.com/cockroachdb/cockroach/pull/71183 -[#71239]: https://github.com/cockroachdb/cockroach/pull/71239 -[#71247]: https://github.com/cockroachdb/cockroach/pull/71247 -[#71254]: https://github.com/cockroachdb/cockroach/pull/71254 -[#71257]: https://github.com/cockroachdb/cockroach/pull/71257 -[#71259]: https://github.com/cockroachdb/cockroach/pull/71259 -[#71299]: https://github.com/cockroachdb/cockroach/pull/71299 -[#71326]: https://github.com/cockroachdb/cockroach/pull/71326 -[#71330]: https://github.com/cockroachdb/cockroach/pull/71330 -[#71357]: https://github.com/cockroachdb/cockroach/pull/71357 -[#71423]: https://github.com/cockroachdb/cockroach/pull/71423 -[#71427]: https://github.com/cockroachdb/cockroach/pull/71427 -[#71429]: https://github.com/cockroachdb/cockroach/pull/71429 -[#71432]: https://github.com/cockroachdb/cockroach/pull/71432 -[#71482]: https://github.com/cockroachdb/cockroach/pull/71482 -[#71498]: https://github.com/cockroachdb/cockroach/pull/71498 -[#71501]: https://github.com/cockroachdb/cockroach/pull/71501 -[#71537]: https://github.com/cockroachdb/cockroach/pull/71537 -[#71546]: https://github.com/cockroachdb/cockroach/pull/71546 -[#71594]: https://github.com/cockroachdb/cockroach/pull/71594 -[#71632]: https://github.com/cockroachdb/cockroach/pull/71632 -[#71643]: https://github.com/cockroachdb/cockroach/pull/71643 -[#71653]: https://github.com/cockroachdb/cockroach/pull/71653 -[#71685]: https://github.com/cockroachdb/cockroach/pull/71685 -[#71690]: https://github.com/cockroachdb/cockroach/pull/71690 -[#71772]: 
https://github.com/cockroachdb/cockroach/pull/71772 -[#71780]: https://github.com/cockroachdb/cockroach/pull/71780 -[#71787]: https://github.com/cockroachdb/cockroach/pull/71787 -[#71814]: https://github.com/cockroachdb/cockroach/pull/71814 -[#71823]: https://github.com/cockroachdb/cockroach/pull/71823 -[#71868]: https://github.com/cockroachdb/cockroach/pull/71868 -[#71871]: https://github.com/cockroachdb/cockroach/pull/71871 -[#71873]: https://github.com/cockroachdb/cockroach/pull/71873 -[#71890]: https://github.com/cockroachdb/cockroach/pull/71890 -[#71896]: https://github.com/cockroachdb/cockroach/pull/71896 -[#71909]: https://github.com/cockroachdb/cockroach/pull/71909 -[#71915]: https://github.com/cockroachdb/cockroach/pull/71915 -[#71916]: https://github.com/cockroachdb/cockroach/pull/71916 -[#71960]: https://github.com/cockroachdb/cockroach/pull/71960 -[#71962]: https://github.com/cockroachdb/cockroach/pull/71962 -[#71971]: https://github.com/cockroachdb/cockroach/pull/71971 -[#71985]: https://github.com/cockroachdb/cockroach/pull/71985 -[#71988]: https://github.com/cockroachdb/cockroach/pull/71988 -[#72000]: https://github.com/cockroachdb/cockroach/pull/72000 -[#72014]: https://github.com/cockroachdb/cockroach/pull/72014 -[#72056]: https://github.com/cockroachdb/cockroach/pull/72056 -[#72113]: https://github.com/cockroachdb/cockroach/pull/72113 -[#72123]: https://github.com/cockroachdb/cockroach/pull/72123 -[#72161]: https://github.com/cockroachdb/cockroach/pull/72161 -[#72297]: https://github.com/cockroachdb/cockroach/pull/72297 -[#72305]: https://github.com/cockroachdb/cockroach/pull/72305 -[#72330]: https://github.com/cockroachdb/cockroach/pull/72330 -[#72365]: https://github.com/cockroachdb/cockroach/pull/72365 -[#72395]: https://github.com/cockroachdb/cockroach/pull/72395 -[#72435]: https://github.com/cockroachdb/cockroach/pull/72435 -[#72490]: https://github.com/cockroachdb/cockroach/pull/72490 -[#72502]: https://github.com/cockroachdb/cockroach/pull/72502 -[#72530]: https://github.com/cockroachdb/cockroach/pull/72530 -[#72534]: https://github.com/cockroachdb/cockroach/pull/72534 -[#72545]: https://github.com/cockroachdb/cockroach/pull/72545 -[#72584]: https://github.com/cockroachdb/cockroach/pull/72584 -[#72595]: https://github.com/cockroachdb/cockroach/pull/72595 -[#72660]: https://github.com/cockroachdb/cockroach/pull/72660 -[#72667]: https://github.com/cockroachdb/cockroach/pull/72667 -[#72677]: https://github.com/cockroachdb/cockroach/pull/72677 -[#72679]: https://github.com/cockroachdb/cockroach/pull/72679 -[#72713]: https://github.com/cockroachdb/cockroach/pull/72713 -[#72734]: https://github.com/cockroachdb/cockroach/pull/72734 -[#72738]: https://github.com/cockroachdb/cockroach/pull/72738 -[#72750]: https://github.com/cockroachdb/cockroach/pull/72750 -[#72767]: https://github.com/cockroachdb/cockroach/pull/72767 -[#72908]: https://github.com/cockroachdb/cockroach/pull/72908 -[#72948]: https://github.com/cockroachdb/cockroach/pull/72948 -[#73055]: https://github.com/cockroachdb/cockroach/pull/73055 -[#73178]: https://github.com/cockroachdb/cockroach/pull/73178 -[#73260]: https://github.com/cockroachdb/cockroach/pull/73260 -[#73279]: https://github.com/cockroachdb/cockroach/pull/73279 -[#73288]: https://github.com/cockroachdb/cockroach/pull/73288 -[#73302]: https://github.com/cockroachdb/cockroach/pull/73302 -[#73303]: https://github.com/cockroachdb/cockroach/pull/73303 -[#73306]: https://github.com/cockroachdb/cockroach/pull/73306 -[#73321]: 
https://github.com/cockroachdb/cockroach/pull/73321 -[#73342]: https://github.com/cockroachdb/cockroach/pull/73342 -[#73346]: https://github.com/cockroachdb/cockroach/pull/73346 -[#73357]: https://github.com/cockroachdb/cockroach/pull/73357 -[#73362]: https://github.com/cockroachdb/cockroach/pull/73362 -[#73367]: https://github.com/cockroachdb/cockroach/pull/73367 -[#73380]: https://github.com/cockroachdb/cockroach/pull/73380 -[#73382]: https://github.com/cockroachdb/cockroach/pull/73382 -[#73392]: https://github.com/cockroachdb/cockroach/pull/73392 -[#73455]: https://github.com/cockroachdb/cockroach/pull/73455 -[#73459]: https://github.com/cockroachdb/cockroach/pull/73459 -[#73473]: https://github.com/cockroachdb/cockroach/pull/73473 -[#73488]: https://github.com/cockroachdb/cockroach/pull/73488 -[#73489]: https://github.com/cockroachdb/cockroach/pull/73489 -[#73491]: https://github.com/cockroachdb/cockroach/pull/73491 -[#73576]: https://github.com/cockroachdb/cockroach/pull/73576 -[#73608]: https://github.com/cockroachdb/cockroach/pull/73608 -[#73642]: https://github.com/cockroachdb/cockroach/pull/73642 -[#73648]: https://github.com/cockroachdb/cockroach/pull/73648 -[#73700]: https://github.com/cockroachdb/cockroach/pull/73700 -[#73712]: https://github.com/cockroachdb/cockroach/pull/73712 -[#73734]: https://github.com/cockroachdb/cockroach/pull/73734 -[#73735]: https://github.com/cockroachdb/cockroach/pull/73735 -[#73744]: https://github.com/cockroachdb/cockroach/pull/73744 -[#73762]: https://github.com/cockroachdb/cockroach/pull/73762 -[#73776]: https://github.com/cockroachdb/cockroach/pull/73776 -[#73802]: https://github.com/cockroachdb/cockroach/pull/73802 -[#73803]: https://github.com/cockroachdb/cockroach/pull/73803 -[#73807]: https://github.com/cockroachdb/cockroach/pull/73807 -[#73832]: https://github.com/cockroachdb/cockroach/pull/73832 -[#73853]: https://github.com/cockroachdb/cockroach/pull/73853 -[#73875]: https://github.com/cockroachdb/cockroach/pull/73875 -[#73910]: https://github.com/cockroachdb/cockroach/pull/73910 -[#73922]: https://github.com/cockroachdb/cockroach/pull/73922 -[#73924]: https://github.com/cockroachdb/cockroach/pull/73924 -[#73928]: https://github.com/cockroachdb/cockroach/pull/73928 -[#73935]: https://github.com/cockroachdb/cockroach/pull/73935 -[#73975]: https://github.com/cockroachdb/cockroach/pull/73975 -[#73978]: https://github.com/cockroachdb/cockroach/pull/73978 -[#73986]: https://github.com/cockroachdb/cockroach/pull/73986 -[#73991]: https://github.com/cockroachdb/cockroach/pull/73991 -[#74102]: https://github.com/cockroachdb/cockroach/pull/74102 -[#74112]: https://github.com/cockroachdb/cockroach/pull/74112 -[#74136]: https://github.com/cockroachdb/cockroach/pull/74136 -[#74138]: https://github.com/cockroachdb/cockroach/pull/74138 -[#74140]: https://github.com/cockroachdb/cockroach/pull/74140 -[#74156]: https://github.com/cockroachdb/cockroach/pull/74156 -[#74157]: https://github.com/cockroachdb/cockroach/pull/74157 -[#74189]: https://github.com/cockroachdb/cockroach/pull/74189 -[#74210]: https://github.com/cockroachdb/cockroach/pull/74210 -[#74222]: https://github.com/cockroachdb/cockroach/pull/74222 -[#74236]: https://github.com/cockroachdb/cockroach/pull/74236 -[#74242]: https://github.com/cockroachdb/cockroach/pull/74242 -[#74281]: https://github.com/cockroachdb/cockroach/pull/74281 -[#74286]: https://github.com/cockroachdb/cockroach/pull/74286 -[#74355]: https://github.com/cockroachdb/cockroach/pull/74355 -[#74365]: 
https://github.com/cockroachdb/cockroach/pull/74365 -[#74393]: https://github.com/cockroachdb/cockroach/pull/74393 -[#74394]: https://github.com/cockroachdb/cockroach/pull/74394 -[#74408]: https://github.com/cockroachdb/cockroach/pull/74408 -[#74411]: https://github.com/cockroachdb/cockroach/pull/74411 -[#74425]: https://github.com/cockroachdb/cockroach/pull/74425 -[#74491]: https://github.com/cockroachdb/cockroach/pull/74491 -[#74499]: https://github.com/cockroachdb/cockroach/pull/74499 -[#74507]: https://github.com/cockroachdb/cockroach/pull/74507 -[#74582]: https://github.com/cockroachdb/cockroach/pull/74582 -[#74592]: https://github.com/cockroachdb/cockroach/pull/74592 -[#74601]: https://github.com/cockroachdb/cockroach/pull/74601 diff --git a/src/current/_includes/releases/v22.1/v22.1.0-alpha.2.md b/src/current/_includes/releases/v22.1/v22.1.0-alpha.2.md deleted file mode 100644 index 7c6e592ae23..00000000000 --- a/src/current/_includes/releases/v22.1/v22.1.0-alpha.2.md +++ /dev/null @@ -1,644 +0,0 @@ -## v22.1.0-alpha.2 - -Release Date: March 7, 2022 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

Backward-incompatible changes

- -- Non-standard [`cron`](https://wikipedia.org/wiki/Cron) expressions that specify seconds or year fields are no longer supported. [#74881][#74881] -- [Changefeeds](https://www.cockroachlabs.com/docs/v22.1/change-data-capture-overview) will now filter out [virtual computed columns](https://www.cockroachlabs.com/docs/v22.1/computed-columns) from events by default. [#74916][#74916] -- The [environment variable](https://www.cockroachlabs.com/docs/v22.1/cockroach-commands#environment-variables) that controls the max amount of CPU that can be taken by password hash computations during authentication was renamed from `COCKROACH_MAX_BCRYPT_CONCURRENCY` to `COCKROACH_MAX_PW_HASH_COMPUTE_CONCURRENCY`. Its semantics remain unchanged. [#74301][#74301] - -
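-
-As a sketch of the still-supported format, a scheduled backup can use a standard five-field cron expression; the schedule label and storage URI below are placeholders:
-
-~~~ sql
--- minute hour day-of-month month day-of-week; no seconds or year field.
-CREATE SCHEDULE 'nightly_backup'
-  FOR BACKUP INTO 's3://backup-bucket/nightly?AUTH=implicit'
-  RECURRING '0 1 * * *';
-~~~
-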

Security updates

- -- CockroachDB is now able to [authenticate users](https://www.cockroachlabs.com/docs/v22.1/security-reference/authentication) via the DB Console and through SQL sessions when the client provides a cleartext password and the stored credentials are encoded using the SCRAM-SHA-256 algorithm. Support for a SCRAM authentication flow is a separate feature and is not the target of this release note. In particular, for SQL client sessions it makes it possible to use the authentication methods `password` (cleartext passwords), and `cert-password` (TLS client cert or cleartext password) with either CRDB-BCRYPT or SCRAM-SHA-256 stored credentials. Previously, only CRDB-BCRYPT stored credentials were supported for cleartext password authentication. [#74301][#74301] -- The hash method used to encode cleartext passwords before storing them is now configurable, via the new [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) `server.user_login.password_encryption`. Its supported values are `crdb-bcrypt` and `scram-sha-256`. The cluster setting only is enabled after all cluster nodes have been upgraded, at which point its default value is `scram-sha-256`. Prior to completion of the upgrade, the cluster behaves as if the cluster setting is set to `crdb-bcrypt` for backward compatibility. Note that the preferred way to populate password credentials for SQL user accounts is to pre-compute the hash client-side, and pass the precomputed hash via [`CREATE USER WITH PASSWORD`](https://www.cockroachlabs.com/docs/v22.1/create-user), [`CREATE ROLE WITH PASSWORD`](https://www.cockroachlabs.com/docs/v22.1/create-role), [`ALTER USER WITH PASSWORD`](https://www.cockroachlabs.com/docs/v22.1/alter-user), or [`ALTER ROLE WITH PASSWORD`](https://www.cockroachlabs.com/docs/v22.1/alter-role). This ensures that the server never sees the cleartext password. [#74301][#74301] -- The cost of the hashing function for `scram-sha-256` is now configurable via the new [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) `server.user_login.password_hashes.default_cost.scram_sha_256`. Its default value is 119680, which corresponds to an approximate password check latency of 50-100ms on modern hardware. This value should be increased over time to reflect improvements to CPU performance: the latency should not become so small that it becomes feasible to brute force passwords via repeated login attempts. Future versions of CockroachDB will likely update this default value. [#74301][#74301] -- When using the default HBA authentication method `cert-password` for SQL client connections, and the SQL client does not present a TLS client certificate to the server, CockroachDB now automatically upgrades the password handshake protocol to use SCRAM-SHA-256 if the user's stored password uses the SCRAM encoding. The previous behavior of requesting a cleartext password is still used if the stored password is encoded using the CRDB-BCRYPT format. An operator can force clients to _always_ request SCRAM-SHA-256 when a TLS client cert is not provided in order to guarantee the security benefits of SCRAM using the authentication methods `cert-scram-sha-256` (either TLS client cert _or_ SCRAM-SHA-256) and `scram-sha-256` (only SCRAM-SHA-256). As in previous releases, mandatory cleartext password authentication can be requested (e.g., for debugging purposes) by using the HBA method `password`. 
This automatic protocol upgrade can be manually disabled using the new cluster setting `server.user_login.cert_password_method.auto_scram_promotion.enable` and setting it to `false`. Disable automatic protocol upgrades if, for example, certain client drivers are found to not support SCRAM-SHA-256 authentication properly. [#74301][#74301] -- In order to promote a transition to SCRAM-SHA-256 for password authentication, CockroachDB now automatically attempts to convert stored password hashes to SCRAM-SHA-256 after a cleartext password authentication succeeds if the target hash method configured via `server.user_login.password_encryption` is `scram-sha-256`. This auto-conversion can happen either during SQL logins or HTTP logins that use passwords, whichever occurs first. When an auto-conversion occurs, a structured event of type `password_hash_converted` is logged to the `SESSIONS` channel. The `PKBDF2` iteration count on the hash is chosen in order to preserve the latency of client logins, to remain similar to the latency incurred from the starting `bcrypt` cost. (For example, the default configuration of `bcrypt` cost 10 is converted to a SCRAM iteration count of 119680.) This choice, however, lowers the cost of brute forcing passwords for an attacker with access to the encoded password hashes, if they have access to ASICs or GPUs, by a factor of ~10. For example, if it would previously cost them $1,000,000 to brute force a `crdb-bcrypt` hash, it would now cost them "just" $100,000 to brute force the SCRAM-SHA-256 hash that results from this conversion. If an operator wishes to compensate for this, three options are available: - 1. Set up their infrastructure such that only passwords with high entropy can be used. For example, this can be achieved by disabling the ability of end-users to select their own passwords and auto-generating passwords for the user, or enforcing some entropy checks during password selection. This way, the entropy of the password itself compensates for the lower hash complexity. - 1. Manually select a higher `SCRAM` iteration count. This can be done either by pre-computing `SCRAM` hashes client-side and providing the pre-computed hash using `ALTER USER WITH PASSWORD`, or adjusting the cluster setting `server.user_login.password_hashes.default_cost.scram_sha_256` and asking CockroachDB to recompute the hash. - 1. Disable the auto-conversion of `crdb-bcrypt` hashes to `scram-sha-256` altogether, using the new cluster setting `server.user_login.upgrade_bcrypt_stored_passwords_to_scram.enabled`. This approach is discouraged as it removes the other security protections offered by SCRAM authentication. The conversion also only happens if the target configured method via `server.user_login.password_encryption` is `scram-sha-256`, because the goal of the conversion is to move clusters towards using SCRAM. [#74301][#74301] -- Added support for [query cancellation](https://www.cockroachlabs.com/docs/v22.1/cancel-query) via the `pgwire` protocol. Since this protocol is unauthenticated, there are a few precautions included. - 1. The protocol requires that a 64-bit key is used to uniquely identify a session. Some of these bits are used to identify the CockroachDB node that owns the session. The rest of the bits are all random. If the node ID is small enough, then only 12 bits are used for the ID, and the remaining 52 bits are random. Otherwise, 32 bits are used for both the ID and the random secret. - 1. A fixed per-node rate limit is used. 
There can only be at most 256 failed cancellation attempts per second. Any other cancel requests that exceed this rate are ignored. This makes it harder for an attacker to guess random cancellation keys. Specifically, if we assume a 32-bit secret and 256 concurrent sessions on a node, it would take 2^16 seconds (about 18 hours) for an attacker to be certain they have cancelled a query. - 1. No response is returned for a cancel request. This makes it impossible for an attacker to know if their guesses are working. Unsuccessful attempts are [logged internally](https://www.cockroachlabs.com/docs/v22.1/logging-use-cases#security-and-audit-monitoring) with warnings. Large numbers of these messages could indicate malicious activity. [#67501][#67501] -- The cluster setting `server.user_login.session_revival_token.enabled` has been added. It is `false` by default. If set to `true`, then a new token-based authentication mechanism is enabled. A token can be generated using the `crdb_internal.create_session_revival_token` built in [function](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators). The token has a lifetime of 10 minutes and is cryptographically signed to prevent spoofing and brute forcing attempts. When initializing a session later, the token can be presented in a `pgwire` `StartupMessage` with a parameter name of `crdb:session_revival_token_base64`, with the value encoded in `base64`. If this parameter is present, all other authentication checks are disabled, and if the token is valid and has a valid signature, the user who originally generated the token authenticates into a new SQL session. If the token is not valid, then authentication fails. The token does not have use-once semantics, so the same token can be used any number of times to create multiple new SQL sessions within the 10 minute lifetime of the token. As such, the token should be treated as highly sensitive cryptographic information. This feature is meant to be used by multi-tenant deployments to move a SQL session from one node to another. It requires the presence of a valid `Ed25519` keypair in `tenant-signing..crt` and `tenant-signing..key`. [#75660][#75660] -- When the `sql.telemetry.query_sampling.enabled` cluster setting is enabled, SQL names and client IPs are no longer redacted in telemetry logs. [#76676][#76676] - -
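-
-A hedged sketch of the workflow described above, assuming a user named `maxroach`; the SCRAM hash literal is truncated and purely illustrative:
-
-~~~ sql
--- Store newly set passwords as SCRAM-SHA-256 hashes.
-SET CLUSTER SETTING server.user_login.password_encryption = 'scram-sha-256';
-
--- Preferred: pass a pre-computed hash so the server never sees the cleartext password.
-ALTER USER maxroach WITH PASSWORD 'SCRAM-SHA-256$119680:...';
-
--- Check whether a hash string is in a recognized format before using it.
-SELECT crdb_internal.check_password_hash_format('SCRAM-SHA-256$119680:...');
-~~~
-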

General changes

-
-- The following metrics were added for observability of cancellation requests made using the PostgreSQL wire protocol:
-  - `sql.pgwire_cancel.total`
-  - `sql.pgwire_cancel.ignored`
-  - `sql.pgwire_cancel.successful`
-
-  The metrics are all counters. The `ignored` counter is incremented if a cancel request was ignored due to exceeding the per-node rate limit of cancel requests. [#76457][#76457]
-- Documentation was added describing how jobs and scheduled jobs function and are used in CockroachDB. [#73995][#73995]
-
-

Enterprise edition changes

- -- Client certificates may now be provided for the `webhook` [changefeed sink](https://www.cockroachlabs.com/docs/v22.1/changefeed-sinks). [#74645][#74645] -- CockroachDB now redacts more potentially sensitive URI elements from changefeed job descriptions. This is a breaking change for workflows that copy URIs. As an alternative, the unredacted URI may be accessed from the jobs table directly. [#75174][#75174] -- Changefeeds now outputs the topic names created by the Kafka sink. Furthermore, these topic names will be displayed in the [`SHOW CHANGEFEED JOBS`](https://www.cockroachlabs.com/docs/v22.1/show-jobs#show-changefeed-jobs) query. [#75223][#75223] -- [Backup and restore](https://www.cockroachlabs.com/docs/v22.1/take-full-and-incremental-backups) jobs now allow encryption/decryption with GCS KMS [#75750][#75750] -- [Kafka sinks](https://www.cockroachlabs.com/docs/v22.1/changefeed-sinks#kafka) support larger messages, up to 2GB in size. [#76265][#76265] -- Added support for a new SQL statement called `ALTER CHANGEFEED`, which allows users to add/drop targets for an existing changefeed. The syntax of the statement is: `{% raw %}ALTER CHANGEFEED {{ADD|DROP} }...{% endraw %}` - - There can be an arbitrary number of `ADD` or `DROP` commands in any order. For example: - - ~~~ sql - ALTER CHANGEFEED 123 ADD foo,bar DROP baz; - ~~~ - - With this statement, users can avoid going through the process of altering a changefeed on their own, and rely on CockroachDB to carry out this task. [#75737][#75737] -- Changefeeds running on tables with a low [`gc.ttlseconds`](https://www.cockroachlabs.com/docs/v22.1/configure-replication-zones#gc-ttlseconds) value now function more reliably due to protected timestamps being maintained for the changefeed targets at the resolved timestamp of the changefeed. The frequency at which the protected timestamp is updated to the resolved timestamp can be configured through the `changefeed.protect_timestamp_interval` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings). If the changefeed lags too far behind such that storage of old data becomes an issue, cancelling the changefeed will release the protected timestamps and allow garbage collection to resume. If `protect_data_from_gc_on_pause` is unset, pausing the changefeed will release the existing protected timestamp record. [#76605][#76605] -- Added support to the `ALTER CHANGEFEED` statement so that users can edit and unset the options of an existing changefeed. The syntax of this addition is the following: - - ~~~ sql - ALTER CHANGEFEED SET UNSET - ~~~ - - [#76583][#76583] -- Users may now alter the sink URI of an existing changefeed. This can be achieved by executing `ALTER CHANGEFEED SET sink = ''` where the sink type of the new sink must match the sink type of the old sink that was chosen at the creation of the changefeed. [#77043][#77043] - -
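-
-A hedged sketch of the new statement forms described above; the job ID `123`, the table names, the option names, and the sink URI are placeholders:
-
-~~~ sql
--- Add and drop watched tables on an existing changefeed.
-ALTER CHANGEFEED 123 ADD foo, bar DROP baz;
-
--- Set or unset options on the same changefeed.
-ALTER CHANGEFEED 123 SET resolved = '10s' UNSET diff;
-
--- Point the changefeed at a new sink of the same sink type.
-ALTER CHANGEFEED 123 SET sink = 'kafka://other-broker:9092';
-~~~
-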

SQL language changes

- -- `CHECK` constraints on the shard column used by [hash-sharded indexes](https://www.cockroachlabs.com/docs/v22.1/hash-sharded-indexes) are no longer printed in the corresponding `SHOW CREATE TABLE`. The constraint had been shown because CockroachDB lacked logic to ensure that shard columns which are part of hash-sharded indexes always have the check constraint which the optimizer relies on to achieve properly optimized plans on hash-sharded indexes. The constraint is now implied by the `USING HASH` clause on the relevant index. [#74179][#74179] -- The experimental command `SCRUB PHYSICAL` is no longer implemented. [#74761][#74761] -- The [`CREATE MATERIALIZED VIEW`](https://www.cockroachlabs.com/docs/v22.1/views#materialized-views) statement now supports `WITH DATA`. [#74821][#74821] -- CockroachDB now has a `crdb_internal.replication_stream_spec` [function](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#built-in-functions) for stream replication. [#73886][#73886] -- CockroachDB has a new [role](https://www.cockroachlabs.com/docs/v22.1/show-roles) `VIEWACTIVITYREDACTED` introduced in v21.2.5 that is similar to `VIEWACTIVITY` but restricts the use of [statement diagnostics bundles](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page#diagnostics). It is possible for a user to have both roles (`VIEWACTIVITY` and `VIEWACTIVITYREDACTED`), but `VIEWACTIVITYREDACTED` takes precedence. [#74715][#74715] -- In v21.2.5 CockroachDB added support for the `ON CONFLICT ON CONSTRAINT` form of [`INSERT ON CONFLICT`](https://www.cockroachlabs.com/docs/v22.1/insert#on-conflict-clause). This form is added for compatibility with PostgreSQL. It permits explicitly selecting an arbiter index for `INSERT ON CONFLICT`, rather than inferring one using a column list, which is the default behavior. [#73460][#73460] -- [Imports](https://www.cockroachlabs.com/docs/v22.1/import) now check readability earlier for multiple files to fail more quickly if, for example, permissions are invalid. [#74863][#74863] -- In v21.2.5 CockroachDB added new roles, `NOSQLLOGIN` and its inverse `SQLLOGIN`, which controls the SQL login ability for a user while retaining their ability to login to the [DB Console](https://www.cockroachlabs.com/docs/v22.1/ui-overview) (as opposed to `NOLOGIN` which restricts both SQL and DB Console access). Without any role options all login behavior is permitted. OIDC logins to the DB Console continue to be permitted with `NOSQLLOGIN` set. [#74706][#74706] -- Added the `default_table_access_method` [session variable](https://www.cockroachlabs.com/docs/v22.1/show-vars), which only takes in `heap`, to match the behavior of PostgreSQL. [#74774][#74774] -- The [distributed plan diagram](https://www.cockroachlabs.com/docs/v22.1/explain-analyze#statement-plan-tree-properties) now lists scanned column names for `TableReaders`. [#75114][#75114] -- Users can now specify the owner when [creating a database](https://www.cockroachlabs.com/docs/v22.1/create-database), similar to PostgreSQL: `CREATE DATABASE name [ [ WITH ] [ OWNER [=] user_name ]` [#74867][#74867] -- The [`CREATE ROLE`](https://www.cockroachlabs.com/docs/v22.1/create-role) and [`ALTER ROLE`](https://www.cockroachlabs.com/docs/v22.1/alter-role) statements now accept password hashes computed using the `scram-sha-256` method. For example: `CREATE USER foo WITH PASSWORD 'SCRAM-SHA-256$4096:B5VaT...'`. 
As for other types of pre-hashed passwords, this auto-detection can be disabled by changing the cluster setting `server.user_login.store_client_pre_hashed_passwords.enabled` to `false`. To ascertain whether a `scram-sha-256` password hash will be recognized, orchestration code can use the [built-in function](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#built-in-functions) `crdb_internal.check_password_hash_format()`. Follow these steps to encode the SCRAM-SHA-256 password: - 1. Get the cleartext password string. - 1. Generate a salt, iteration count, stored key and server key according to [RFC 5802](https://datatracker.ietf.org/doc/html/rfc5802). - 1. Encode the hash into a format recognized by CockroachDB: the string `SCRAM-SHA-256$`, followed by the iteration count, followed by `:`, followed by the base64-encoded salt, followed by `$`, followed by the base-64 stored key, followed by `:`, followed by the base-64 server key. [#74301][#74301] -- The session variable `password_encryption` is now exposed to SQL clients. Note that SQL clients cannot modify its value directly, it is only configurable via a cluster setting. [#74301][#74301] -- When possible, CockroachDB will now automatically require the PostgreSQL-compatible SCRAM-SHA-256 protocol when performing password validation when SQL client login. This mechanism is not used when SQL clients use TLS client certs, which is the recommended approach. This assumes support for SCRAM-SHA-256 in client drivers. As of 2020, SCRAM-SHA-256 is prevalent in the PostgreSQL driver ecosystem. However, users should be mindful of the following possible behavior changes: - - An application that tries to detect whether password verification has failed by checking server error messages, might observe different error messages with SCRAM-SHA-256. Those checks, if present, need to be updated. - - If a client driver simply does not support SCRAM-SHA-256 at all, the operator retains the option to set the cluster setting `server.user_login.cert_password_method.auto_scram_promotion.enable` to `false` to force the previous password verification method instead. [#74301][#74301] -- After a cluster upgrade, the first time a SQL client logs in using password authentication, the password will be converted to a new format (`scram-sha-256`) if it was encoded with `crdb-bcrypt` previously. This conversion will increase the latency of that initial login by a factor of ~2x, but it will be reduced again after the conversion completes. If login latency is a concern, operators should perform the password conversion ahead of time, by computing new `SCRAM` hashes for the clients via [`ALTER USER WITH PASSWORD`](https://www.cockroachlabs.com/docs/v22.1/alter-user) or [`ALTER ROLE WITH PASSWORD`](https://www.cockroachlabs.com/docs/v22.1/alter-role). This conversion can also be disabled via the new cluster setting `server.user_login.upgrade_bcrypt_stored_passwords_to_scram.enabled`. [#74301][#74301] -- Statements are no longer formatted prior to being sent to the UI, but the new built-in function remains. [#75443][#75443] -- The default SQL statistics flush interval is now 10 minutes. A new cluster setting `sql.stats.aggregatinon.interval` controls the aggregation interval of SQL stats, with a default value of 1 hour. 
[#74831][#74831] -- [`SELECT`](https://www.cockroachlabs.com/docs/v22.1/selection-queries), [`INSERT`](https://www.cockroachlabs.com/docs/v22.1/insert), [`DELETE`](https://www.cockroachlabs.com/docs/v22.1/delete), and [`UPDATE`](https://www.cockroachlabs.com/docs/v22.1/update) can no longer be granted or revoked on databases. Previously `SELECT`, `INSERT`, `DELETE`, and `UPDATE` would be converted to `ALTER DEFAULT PRIVILEGES` on `GRANT`s and were revocable. [#72665][#72665] -- Added `pgcodes` to errors when an invalid storage parameter is passed. [#75262][#75262] -- Implemented the [`ALTER TABLE ... SET (...)`](https://www.cockroachlabs.com/docs/v22.1/alter-table) syntax. We do not support any storage parameters yet, so this statement does not change the schema. [#75262][#75262] -- [`SHOW GRANTS ON TABLE`](https://www.cockroachlabs.com/docs/v22.1/show-grants) now includes the `is_grantable` column [#75226][#75226] -- Implemented the [`ALTER TABLE ... RESET (...)`](https://www.cockroachlabs.com/docs/v22.1/alter-table) syntax. This statement currently does not change the schema. [#75429][#75429] -- S3 URIs used for [`BACKUP`](https://www.cockroachlabs.com/docs/v22.1/backup), [`EXPORT`](https://www.cockroachlabs.com/docs/v22.1/export), or [`CHANGEFEED`](https://www.cockroachlabs.com/docs/v22.1/create-changefeed) can now include the query parameter `S3_STORAGE_CLASS` to configure the storage class used when that job creates objects in the designated S3 bucket. [#75588][#75588] -- The [cost based optimizer](https://www.cockroachlabs.com/docs/v22.1/cost-based-optimizer) now modifies the query cost based on the `avg_size` table statistic, which may change query plans. This is controlled by the [session variable](https://www.cockroachlabs.com/docs/v22.1/set-vars) `cost_scans_with_default_col_size`, and can be disabled by setting it to `true`: `SET cost_scans_with_default_col_size=true`. [#74551][#74551] -- The [`crdb_internal.jobs`](https://www.cockroachlabs.com/docs/v22.1/crdb-internal) table now has a new column `execution_events` which is a structured JSON form of `execution_errors`. [#75556][#75556] -- The [privileges](https://www.cockroachlabs.com/docs/v22.1/security-reference/authorization) reported in `information_schema.schema_privileges` for non-user-defined schemas no longer are inferred from the privileges on the parent database. Instead, virtual schemas (like `pg_catalog` and `information_schema`) always report the `USAGE` privilege for the public role. The `pg_temp` schema always reports `USAGE` and `CREATE` privileges for the public role. [#75628][#75628] -- Transaction ID to transaction fingerprint ID mapping is now stored in the new transaction ID cache, a FIFO unordered in-memory buffer. The size of the buffer is 64 MB by default and configurable via `sql.contention.txn_id_cache.max_size` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings). Consequentially, two additional metrics are introduced: - - `sql.contention.txn_id_cache.size`: the current memory usage of transaction ID cache. - - `sql.contention.txn_id_cache.discarded_count`: the number of resolved transaction IDs that are dropped due to memory constraints. 
[#74115][#74115] -- Added new [built-in functions](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#built-in-functions) called `crdb_internal.revalidate_unique_constraint`, `crdb_internal.revalidate_unique_constraints_in_table`, and `crdb_internal.revalidate_unique_constraints_in_all_tables`, which can be used to revalidate existing unique constraints. The different variations support validation of a single constraint, validation of all unique constraints in a table, and validation of all unique constraints in all tables in the current database, respectively. If any constraint fails validation, the functions will return an error with a hint about which data caused the constraint violation. These violations can then be resolved manually by updating or deleting the rows in violation. This will be useful to users who think they may have been affected by issue [#73024](https://github.com/cockroachdb/cockroach/issues/73024). [#75548][#75548] -- The [`SHOW GRANTS ON SCHEMA`](https://www.cockroachlabs.com/docs/v22.1/show-grants) statement now includes the `is_grantable` column [#75722][#75722] -- CockroachDB now disallows [type casts](https://www.cockroachlabs.com/docs/v22.1/scalar-expressions#explicit-type-coercions) from [`ENUM`](https://www.cockroachlabs.com/docs/v22.1/enum) to [`BYTES`](https://www.cockroachlabs.com/docs/v22.1/bytes). [#75816][#75816] -- [`EXPORT PARQUET`](https://www.cockroachlabs.com/docs/v22.1/export) has a new `compression` option whose value can be `gzip` or `snappy`. An example query: - - ~~~ sql - EXPORT INTO PARQUET 'nodelocal://0/compress_snappy' WITH compression = snappy FROM SELECT * FROM foo - ~~~ - - By default, the Parquet file will be uncompressed. With compression, the file name will be `.parquet.gz` or `.parquet.snappy`. [#74661][#74661] - -- Setting a UTC timezone offset of greater than 167 or less than -167 now returns an error. For example: - - ~~~ sql - SET TIME ZONE '168' - ~~~ - - Gives error: - - ~~~ - invalid value for parameter "timezone": "'168'": cannot find time zone "168": UTC timezone offset is out of range. - ~~~ - - ~~~ sql - SET TIME ZONE '-168' - ~~~ - - Gives error: - - ~~~ - invalid value for parameter "timezone": "'-168'": cannot find time zone "-168": UTC timezone offset is out of range. - ~~~ - - [#75822][#75822] - -- The [`RESET ALL`](https://www.cockroachlabs.com/docs/v22.1/reset-vars) statement was added, which resets the values of all [session variables](https://www.cockroachlabs.com/docs/v22.1/show-vars#supported-variables) to their default values. [#75804][#75804] -- The [`SHOW GRANTS ON DATABASE`](https://www.cockroachlabs.com/docs/v22.1/show-grants) statement now includes the `is_grantable` column [#75854][#75854] -- Reordered unimplemented tables in [`pg_catalog`](https://www.cockroachlabs.com/docs/v22.1/pg-catalog) and `information_schema` to match PostgreSQL. [#75461][#75461] -- CockroachDB will now remove incompatible database privileges to be consistent with PostgreSQL. Existing [`SELECT`](https://www.cockroachlabs.com/docs/v22.1/selection-queries), [`INSERT`](https://www.cockroachlabs.com/docs/v22.1/insert), [`UPDATE`](https://www.cockroachlabs.com/docs/v22.1/update), and [`DELETE`](https://www.cockroachlabs.com/docs/v22.1/delete) privileges on databases will be converted to the equivalent default privileges. [#75562][#75562] -- CockroachDB now allows users who do not have `ADMIN` privileges to use `SHOW RANGES` if the `ZONECONFIG` privilege is granted to the user. 
[#75551][#75551] -- The `WITH (param=value)` syntax is now allowed for [primary key](https://www.cockroachlabs.com/docs/v22.1/primary-key) definitions, to align with PostgreSQL and to support `WITH (bucket_count=...)` syntax for [hash-sharded indexes](https://www.cockroachlabs.com/docs/v22.1/hash-sharded-indexes). [#75971][#75971] -- CockroachDB now aliases the `idle_session_timeout` session variable with the `idle_in_session_timeout` variable to align with PostgreSQL. [#76002][#76002] -- The `SHOW GRANTS ON TYPE` now includes the `is_grantable` column [#75957][#75957] -- The `bucket_count` storage parameter was added. To create hash-sharded indexes, you can use the new syntax: `USING HASH WITH (bucket_count=xxx)`. The `bucket_count` storage parameter can only be used with `USING HASH`. The old `WITH BUCKET_COUNT=xxx` syntax is still supported for backward compatibility. However, you can only use the old or new syntax, but not both. An error is returned for mixed clauses: `USING HASH WITH BUCKET_COUNT=5 WITH (bucket_count=5)`. [#76068][#76068] -- The `bulkio.backup.merge_file_buffer_size` cluster setting default value has been changed from 16MiB to 128MiB. This value determines the maximum byte size of SSTs that we buffer before forcing a flush during a backup. [#75988][#75988] -- CockroachDB now supports for the `bucket_count` storage parameter syntax, and should be used over the old `WITH BUCKET_COUNT=xxx` syntax. With this change, CockroachDB outputs the new syntax in [`SHOW CREATE`](https://www.cockroachlabs.com/docs/v22.1/show-create) statements. [#76112][#76112] -- CockroachDB now saves statement plan hashes or [gists](https://www.cockroachlabs.com/docs/v22.1/crdb-internal#detect-suboptimal-and-regressed-plans) to the Statements persisted stats inside the Statistics column. [#75762][#75762] -- PostgreSQL error codes were added to the majority of [spatial functions](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#spatial-functions). [#76129][#76129] -- Performing a `BACKUP` on ranges containing extremely large numbers of revisions to a single row no longer fails with errors related to exceeding the size limit. [#76254][#76254] -- The default bucket count for hash-sharded index is 16. [#76115][#76115] -- CockroachDB now filters out internal statements and transactions from UI timeseries metrics. [#75815][#75815] -- [`EXPORT PARQUET`](https://www.cockroachlabs.com/docs/v22.1/export) now supports all data types that Avro changefeeds support. Below are the data type conversions from CockroachDB to Parquet. To maintain backward compatibility with older Parquet readers, Parquet converted types were also annotated. To learn about more about Parquet data representation, [see the Parquet docs](https://github.com/apache/parquet-format). 
- - CockroachDB Type Family -> Parquet Type | Parquet Logical Type | Parquet Converted Type - --|---|-- - Bool -> boolean | nil | nil - String -> byte array | string | string - Collated String -> byte array | string| string - INet -> byte array | string | string - JSON -> byte array | json | json - Int (oid.T_int8) -> int64 | int64 | int64 - Int (oid.T_int4 or oid.T_int2) -> int32 | int32 | int32 - Float -> float64 | nil | nil - Decimal -> byte array | decimal (Note: scale and precision data are preserved in the parquet file) | decimal - Uuid -> fixed length byte array (16 bytes) | uuid | no converted type - Bytes -> byte array | nil | nil - Bit -> byte array | nil | nil - Enum -> byte array | Enum | Enum - Box2d -> byte array | string | string - Geography -> byte array | nil | nil - Geometry -> byte array | nil | nil - Date -> byte array | string | string - Time -> int64 | time (note: microseconds since midnight) | time - TimeTz -> byte array | string | string - Interval -> byte array | string (specifically represented as ISO8601) | string - Timestamp -> byte array | string | string - TimestampTz -> byte array | string | string - Array -> encoded as a repeated field and each array value gets encoded by pattern described above. | List | List - - [#75890][#75890] - -- [`SHOW CREATE TABLE`](https://www.cockroachlabs.com/docs/v22.1/show-create) no longer shows the `FAMILY` clause if there is only the `PRIMARY` family clause. [#76285][#76285] -- CockroachDB now records the approximate time when an index was created it. This information is exposed via a new `NULL`-able `TIMESTAMP` column, `created_at`, on [`crdb_internal.table_indexes`](https://www.cockroachlabs.com/docs/v22.1/crdb-internal). [#75753][#75753] -- Added support for query cancellation via the `pgwire` protocol. CockroachDB will now respond to a `pgwire` cancellation by forwarding the request to the node that is running a particular query. That node will then cancel the query that is currently running in the session identified by the cancel request. The cancel request is made through the `pgwire` protocol when initializing a new connection. The client must first send 32 bits containing the integer 80877102, followed immediately by the 64-bit `BackendKeyData` message that the server sent to the client when the session was started. Most PostgreSQL drivers handle this protocol already, so there's nothing for the end-user to do apart from calling the `cancel` function that their driver offers. See the [PostgreSQL docs](https://www.postgresql.org/docs/13/protocol-flow.html#id-1.10.5.7.9) for more information. [#67501][#67501] -- Refactored the [`BACKUP`](https://www.cockroachlabs.com/docs/v22.1/backup), [`SHOW BACKUP`](https://www.cockroachlabs.com/docs/v22.1/show-backup), and [`RESTORE`](https://www.cockroachlabs.com/docs/v22.1/restore) `incremental_storage` option to `incremental_location`. [#76416][#76416] -- Restored data now appears to have been written at the time it was restored, rather than the time at which it was backed up, when reading the lower-level write timestamps from the rows themselves. This affects various internal operations and the result of `crdb_internal_mvcc_timestamp`. [#76271][#76271] -- The built-in functions `crdb_internal.force_panic`, `crdb_internal.force_log_fatal`, `crdb_internal.set_vmodule`, `crdb_internal.get_vmodule` are now available to all `admin` users, not just `root`. [#76518][#76518] -- `BACKUP` of a table marked with `exclude_data_from_backup` via `ALTER TABLE ... 
SET (exclude_data_from_backup = true)` will no longer backup that table's row data. The backup will continue to backup the table's descriptor and related metadata, and so on restore we will end up with an empty version of the backed up table. [#75451][#75451] -- Failed [`DROP INDEX`](https://www.cockroachlabs.com/docs/v22.1/drop-index) schema changes are no longer rolled back. Rolling back a failed `DROP INDEX` requires the index to be rebuilt, a potentially long-running, expensive operation. Further, in previous versions, such rollbacks were already incomplete as they failed to roll back cascaded drops for dependent views and foreign key constraints. [#75727][#75727] -- Fixed a bug where when `sql.contention.txn_id_cache.max_size` was set to 0, it would effectively turn off the transaction ID cache. [#76523][#76523] -- CockroachDB now allows users to add `NEW_KMS` encryption keys to existing backups using: `ALTER BACKUP ADD NEW_KMS = WITH OLD_KMS = ; ALTER BACKUP IN ADD NEW_KMS = WITH OLD_KMS = ` The `OLD_KMS` value must refer to at least one KMS URI that was previously used to encrypt the backup. Following successful completion of the `ALTER BACKUP`, subsequent backups, restore and show commands can use any of old or new KMS URIs to decrypt the backup. [#75900][#75900] -- [Primary key](https://www.cockroachlabs.com/docs/v22.1/primary-key) columns which are not part of a unique secondary index (but are "implicitly" included because all indexes include all primary key columns) are now marked as `storing` in the `information_schema.statistics` table and in `SHOW INDEX`. This is technically more correct; the column is in the value in KV and not in the indexed key. [#72670][#72670] -- A special flavor of `RESTORE`, `RESTORE SYSTEM USERS FROM ...`, was added to support restoring system users from a backup. When executed, the statement recreates those users which are in a backup of `system.users` but do not currently exist (ignoring those who do) and re-grant roles for users if the backup contains system.role_members. [#71542][#71542] -- Added support for `DECLARE`, `FETCH`, and `CLOSE` commands for creating, using, and deleting [SQL cursors](https://www.cockroachlabs.com/docs/v22.1/cursors). [#74006][#74006] -- [SQL cursors](https://www.cockroachlabs.com/docs/v22.1/cursors) now appear in `pg_catalog.pg_cursors`. [#74006][#74006] -- CockroachDB now turns on support for hash-sharded indexes in implicit partitioned tables. Previously, CockroachDB blocked users from creating hash-sharded indexes in all kinds of partitioned tables including implicit partitioned tables using `PARTITION ALL BY` or `REGIONAL BY ROW`. Primary keys cannot be hash-sharded if a table is explicitly partitioned with `PARTITION BY` or an index cannot be hash-sharded if the index is explicitly partitioned with `PARTITION BY`. Partitioning columns cannot be placed explicitly as key columns of a hash-sharded index, including regional-by-row table's `crdb_region` column. [#76358][#76358] -- When a hash-sharded index is partitioned, ranges are now pre-split within every single possible partition on shard boundaries. Each partition is split up to 16 ranges, otherwise split into the number bucket count ranges. Note that, only the list partition is being pre-split. CockroachDB doesn't pre-split range partitions. [#76358][#76358] -- New user privileges were added: `VIEWCLUSTERSETTING` and `NOVIEWCLUSTERSETTING` that controls whether users can view cluster settings only. 
[#76012][#76012] -- Several error cases in geospatial and other built-in functions now return more appropriate error codes. [#76458][#76458] -- [Expression indexes](https://www.cockroachlabs.com/docs/v22.1/expression-indexes) can no longer have duplicate expressions. [#76863][#76863] -- The `crdb_internal.serialize_session` and `crdb_internal.deserialize_session` functions now handle prepared statements. When deserializing, any prepared statements that existed when the session was serialized are re-prepared. Re-preparing a statement if the current session already has a statement with that name throws an error. [#76399][#76399] -- The `experimental_enable_hash_sharded_indexes` session variable was removed, along with the corresponding cluster setting. The ability to create hash-sharded indexes is enabled automatically. SQL statements that refer to the setting will still work but will have no effect. [#76937][#76937] -- Added the session variable `default_transaction_quality_of_service` which controls the priority of work submitted to the different [admission control](https://www.cockroachlabs.com/docs/v22.1/admission-control) queues on behalf of SQL requests submitted in a session. Admission control must be enabled for this setting to have an effect. To increase admission control priority of subsequent SQL requests: - - ~~~ sql - SET default_transaction_quality_of_service=critical; - ~~~ - - To decrease admission control priority of subsequent SQL requests: - - ~~~ sql - SET default_transaction_quality_of_service=background; - ~~~ - - To reset admission control priority to the default session setting (in between background and critical): - - ~~~ sql - SET default_transaction_quality_of_service=regular; - ~~~ - - [#76512][#76512] -- CockroachDB now limits the bucket count in [hash-sharded indexes](https://www.cockroachlabs.com/docs/v22.1/hash-sharded-indexes) to an inclusive range of [2, 2048]. Previously we only required the bucket count a positive Int32 integer (greater than 1). [#77004][#77004] -- Added support for distributed import queries in multi-tenant environments, which allows import queries to have improved parallelism by utilizing all available SQL pods in the tenant. [#76566][#76566] -- The `ST_Box2DFromGeoHash` function now accepts `NULL` arguments. If the precision is `NULL`, it is equivalent to no precision being passed in. Upper-case characters are now parsed as lower-case characters for `geohash`, matching PostGIS behavior. [#76990][#76990] -- CockroachDB now supports the `SHOW COMPLETIONS AT OFFSET FOR ` syntax that returns a set of SQL keywords that can complete the keyword at `` in the given ``. If the offset is in the middle of a word, then it returns the full word. For example `SHOW COMPLETIONS AT OFFSET 1 FOR "SELECT"` returns `select`. [#72925][#72925] -- A new row level TTL was added to CockroachDB, which is available as a beta feature. This allows users to use a special syntax to automatically mark rows for deletion. Rows are deleted using a `SCHEDULED JOB`. - - A user can create a table with TTL using: - - ~~~ sql - CREATE TABLE t (id INT PRIMARY KEY) WITH (ttl_expire_after = '10 mins') - ~~~ - - Where `ttl_expire_after` is a [duration expression](https://www.cockroachlabs.com/docs/v22.1/interval). 
A user can also add TTL to an existing table using: - - ~~~ sql - ALTER TABLE t SET (ttl_expire_after = '10 mins') - ~~~ - - This creates a new column, `crdb_internal_expiration`, which automatically is set to `now() + ttl_expire_after` when inserted by default or on update. The scheduled job will delete any rows which exceed this timestamp as of the beginning of the job run. The TTL job is configurable in a few ways using the `WITH`/`SET` syntax: - - - `ttl_select_batch_size`: how many rows to select at once (default is cluster setting `sql.ttl.default_select_batch_size`) - - `ttl_delete_batch_size`: how many rows to delete at once (default is cluster setting `sql.ttl.default_select_batch_size`) - - `ttl_delete_rate_limit`: maximum rows to delete per second for the given table (default is cluster setting `sql.default.default_delete_rate_limit`) - - `ttl_pause`: pauses the TTL job (also globally pausable with `sql.ttl.job.enabled`). - - Using `ALTER TABLE table_name RESET ()` will reset the parameter to re-use the default, or `RESET(ttl)` will disable the TTL job for the table and remove the `crdb_internal_expiration` column. [#76918][#76918] - -- Added the cluster setting `sql.contention.event_store.capacity`. This cluster setting can be used to control the in-memory capacity of the contention event store. When this setting is set to zero, the contention event store is disabled. [#76719][#76719] -- When dropping a user that has default privileges, the error message now includes which database and schema in which the default privileges are defined. Additionally a hint is given to show exactly how to remove the default privileges. For example: - - ~~~ - pq: role testuser4 cannot be dropped because some objects depend on it owner of default privileges on new sequences belonging to role testuser4 in database testdb2 in schema s privileges for default privileges on new sequences belonging to role testuser3 in database testdb2 in schema s privileges for default privileges on new sequences for all roles in database testdb2 in schema public privileges for default privileges on new sequences for all roles in database testdb2 in schema s HINT: USE testdb2; ALTER DEFAULT PRIVILEGES FOR ROLE testuser4 IN SCHEMA S REVOKE ALL ON SEQUENCES FROM testuser3; USE testdb2; ALTER DEFAULT PRIVILEGES FOR ROLE testuser3 IN SCHEMA S REVOKE ALL ON SEQUENCES FROM testuser4; USE testdb2; ALTER DEFAULT PRIVILEGES FOR ALL ROLES IN SCHEMA PUBLIC REVOKE ALL ON SEQUENCES FROM testuser4; USE testdb2; ALTER DEFAULT PRIVILEGES FOR ALL ROLES IN SCHEMA S REVOKE ALL ON SEQUENCES FROM testuser4; - ~~~ - - [#77016][#77016] -- Added support for distributed backups in a multitenant environment that uses all available SQL pods in the tenant. [#77023][#77023] - -
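-
-A few hedged usage sketches for features described above; the table, column, and cursor names are placeholders:
-
-~~~ sql
--- Hash-sharded index using the new storage-parameter syntax.
-CREATE INDEX events_ts_idx ON events (ts) USING HASH WITH (bucket_count = 8);
-
--- Keyword completion for a partial statement at a byte offset.
-SHOW COMPLETIONS AT OFFSET 3 FOR 'SEL';
-
--- SQL cursors are used inside an explicit transaction.
-BEGIN;
-DECLARE c CURSOR FOR SELECT * FROM events;
-FETCH 10 FROM c;
-CLOSE c;
-COMMIT;
-~~~
-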

Operational changes

- -- Sending a `SIGUSR2` signal to a CockroachDB process, including one running a client command, now causes it to open an HTTP port that serves the basic Go performance inspection endpoints for use with `pprof`. [#75678][#75678] -- Operators who wish to access HTTP endpoints of the cluster through a proxy can now request specific `nodeID`s through a `remote_node_id` query parameter or cookie with the value set to the `nodeID` to which they would like to proxy the connection. [#72659][#72659] -- Added the `admission.epoch_lifo.enabled` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings), disabled by default, which enables the use of epoch-LIFO adaptive queueing behavior in [admission control](https://www.cockroachlabs.com/docs/v22.1/admission-control). [#71882][#71882] -- Added the cluster setting `bulkio.backup.resolve_destination_in_job.enabled`, which can be used to delay resolution of a backup's destination until the job starts running. [#76670][#76670] -- A `server.max_connections` cluster setting was added to limit the maximum number of connections to a server. It is disabled by default. [#76401][#76401] -- `BACKUP` now resolves incremental backup destinations during the job's execution phase rather than while it is being created, to reduce contention on the `system.jobs` table. The `bulkio.backup.resolve_destination_in_job.enabled` cluster setting that enabled this functionality in some v21.2 patch releases was removed. [#76853][#76853] -- Added the cluster setting `kv.raft_log.loosely_coupled_truncation.enabled`, which can be used to disable loosely coupled truncation. [#76215][#76215] -- `RESTORE` now runs with higher parallelism by default to improve performance. [#76907][#76907] -- Added the `admission.epoch_lifo.epoch_duration`, `admission.epoch_lifo.epoch_closing_delta_duration`, and `admission.epoch_lifo.queue_delay_threshold_to_switch_to_lifo` cluster settings for configuring epoch-LIFO queueing in admission control. [#76951][#76951] - -
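-- As a brief sketch of how the settings above are changed (the values shown are arbitrary examples, not recommendations):
-
-    ~~~ sql
-    -- Cap the number of SQL connections per server (disabled by default).
-    SET CLUSTER SETTING server.max_connections = 1000;
-
-    -- Opt in to epoch-LIFO adaptive queueing in admission control.
-    SET CLUSTER SETTING admission.epoch_lifo.enabled = true;
-
-    -- Check a setting's current value.
-    SHOW CLUSTER SETTING server.max_connections;
-    ~~~
-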

Command-line changes

- -- Fixed the [CLI help](https://www.cockroachlabs.com/docs/v22.1/cockroach-sql) text for `ALTER DATABASE` to show correct options for `ADD REGION` and `DROP REGION`, and include some missing options such as `CONFIGURE ZONE`. [#74929][#74929] -- If graceful drain range lease transfer encounters issues, verbose logging is now automatically enabled to help with troubleshooting. [#68488][#68488] -- All [`cockroach` commands](https://www.cockroachlabs.com/docs/v22.1/cockroach-commands) now log their stack but do not exit when sent a `SIGQUIT` signal. This behavior is consistent with the behavior of `cockroach start`. [#75678][#75678] -- The [`debug zip`](https://www.cockroachlabs.com/docs/v22.1/cockroach-debug-zip) utility now also scrapes the cluster-wide KV replication reports in the output. [#75239][#75239] -- The flag `--self` of the [`cockroach node decommission` command](https://www.cockroachlabs.com/docs/v22.1/cockroach-node) is deprecated. Instead, operators should specify the node ID of the target node as an explicit argument. The node that the command is connected to should not be a target node. [#74319][#74319] -- Added a new optional `version` argument to the `doctor examine` command. This can be used to enable or disable validation when examining older ZIP directories. [#76166][#76166] -- The `debug zip` CLI command now supports exporting `system` and `crdb_internal` tables to a ZIP folder for tenants. [#75572][#75572] -- Added instructions to an error message when initializing `debug tsdump`. [#75880][#75880] -- `cockroach sql` (and [`demo`](https://www.cockroachlabs.com/docs/v22.1/cockroach-demo)) now continue to accept user input when Ctrl+C is pressed at the interactive prompt and the current input line is empty. Previously, it would terminate the shell. To terminate the shell, the client-side command `\q` is still supported. The user can also terminate the input altogether via `EOF` (Ctrl+D). The behavior for non-interactive use remains unchanged. [#76427][#76427] -- The interactive SQL shell (`cockroach sql`, `cockroach demo`) now supports interrupting a currently running query with Ctrl+C, without losing access to the shell. [#76437][#76437] -- Added a new CLI flag `--max-tsdb-memory` used to set the memory budget for timeseries queries when processing requests from the [**Metrics** page in the DB Console](https://www.cockroachlabs.com/docs/v22.1/ui-overview-dashboard). Most users should not need to change this setting as the default of 1% of system memory or 64 MiB, whichever is greater, is adequate for most deployments. In cases where a deployment of hundreds of nodes has low per-node memory available (for example, below 8 GiB) it may be necessary to increase this value to `2%` or higher in order to render time series graphs for the cluster using the DB Console. Otherwise, use the default settings. [#74662][#74662] -- Node drains now ensure that SQL statistics are not lost during the process, but are now preserved in the statement statistics system table. [#76397][#76397] -- The CLI now auto completes on tab by using `SHOW COMPLETIONS AT OFFSET`. [#72925][#72925] - -

API endpoint changes

- -- The `/_status/load` endpoint, which delivers an instant measurement of CPU load, is now available for regular CockroachDB nodes and not just multitenant SQL-only servers. [#75852][#75852] -- The `StatusClient` interface has been extended with a new request called `NodesListRequest`. This request returns a list of KV nodes for KV servers and SQL nodes for SQL only servers with their corresponding SQL and RPC addresses. [#75572][#75572] -- Users with the `VIEWACTIVITYREDACTED` [role](https://www.cockroachlabs.com/docs/v22.1/security-reference/authorization) will not have access to the full queries with constants in the `ListSessions` response. [#76675][#76675] - -

DB Console changes

- -- Removed `$ internal` as one of the apps options under the [**Statements**](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page) and [**Transactions**](https://www.cockroachlabs.com/docs/v22.1/ui-transactions-page) page filters. [#75470][#75470] -- Removed formatting of statements on the [**Statements**](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page#statement-details-page), [**Transactions**](https://www.cockroachlabs.com/docs/v22.1/ui-transactions-page#transaction-details-page), and **Index** details pages. [#75443][#75443] -- Changed the order of tabs under the **SQL Activity** page to be **Statements**, **Transactions**, and [**Sessions**](https://www.cockroachlabs.com/docs/v22.1/ui-sessions-page). [#75490][#75490] -- The logical plan text is now included in searchable text in the **Statements** page. [#75097][#75097] -- If the user has the role `VIEWACTIVITYREDACTED`, we now hide the Statement Diagnostics bundle info on **Statements** page (diagnostics column), **Statement Details** page (diagnostics tab) and [**Advanced Debug**](https://www.cockroachlabs.com/docs/v22.1/ui-debug-pages) page (diagnostics history). [#75274][#75274] -- Loading and error pages are now below page config on the **Transactions** and **Statements** pages. This was introduced in CockroachDB v21.2.5. [#75458][#75458] -- Added `Circuit Breaker` graphs on the **Replication Dashboard** in the DB Console. This was introduced in CockroachDB v21.2.5. [#75613][#75613] -- Added an option to cancel a running request for statement diagnostics. [#75733][#75733] -- DB Console requests can now be routed to arbitrary nodes in the cluster. Users can select a node from a dropdown in the **Advanced Debug** page of the DB Console to route their UI to that node. Manually initiated requests can either add a `remote_node_id` query parameter to their request or set a `remote_node_id` HTTP cookie in order to manage the routing of their request. [#72659][#72659] -- We no longer show information about aggregation timestamps on the **Statements** and **Statement Details** pages, since now all the statement fingerprints are grouped inside the same time selection. [#76301][#76301] -- Added the status of automatic statistics collection to the [**Database**](https://www.cockroachlabs.com/docs/v22.1/ui-databases-page) and Database table pages in the DB Console. -- Added the timestamp of the last statistics collection to the **Database** details and **Database** table pages in the DB Console. [#76168][#76168] -- Open SQL Transactions and Active SQL Transactions are now downsampled using `MAX` instead of `AVG` and will more accurately reflect narrow spikes in transaction counts when looking at downsampled data. [#76348][#76348] -- Display circuit breakers in problems ranges and range status. [#75809][#75809] -- A **Now** button was added to the **Statements** and **Transactions** pages. The **Reset time** link was replaced by the **Now** button. [#76691][#76691] -- Changed `invalid lease` to `expired lease` on the Problem Ranges section of the **Advanced Debug** page [#76757][#76757] -- Added column selector, filters, and new columns to the **Sessions** and **Sessions Details** pages. [#75965][#75965] -- Added long loading messages to the [**SQL Activity**](https://www.cockroachlabs.com/docs/v22.1/ui-sql-dashboard) pages. [#76739][#76739] - -

Bug fixes

- -- Fixed possible panics in some distributed queries using `ENUM`s in [join predicates](https://www.cockroachlabs.com/docs/v22.1/joins). [#74659][#74659] -- Fixed a bug that could previously cause redundant lease transfers. [#74726][#74726] -- Fixed a bug where deleting data in schema changes (for example, when dropping an index or table) could fail with a `command too large` error. [#74674][#74674] -- Fixed a bug where CockroachDB could encounter an internal error when performing [`UPSERT`](https://www.cockroachlabs.com/docs/v22.1/upsert) or [`INSERT ... ON CONFLICT`](https://www.cockroachlabs.com/docs/v22.1/insert#on-conflict-clause) queries in some cases when the new rows contained `NULL` values (either `NULL`s explicitly specified or `NULL`s used since some columns were omitted). [#74825][#74825] -- Fixed a bug where the scale of a [`DECIMAL`](https://www.cockroachlabs.com/docs/v22.1/decimal) column was not enforced when values specified in scientific notation (for example, `6e3`) were inserted into the column. [#74869][#74869] -- Fixed a bug where certain malformed [backup schedule expressions](https://www.cockroachlabs.com/docs/v22.1/manage-a-backup-schedule) caused the node to crash. [#74881][#74881] -- Fixed a bug where a [`RESTORE`](https://www.cockroachlabs.com/docs/v22.1/restore) job could hang if it encountered an error when ingesting restored data. [#74905][#74905] -- Fixed a bug which caused errors in rare cases when trying to divide `INTERVAL` values by `INT4` or `INT2` values. [#74882][#74882] -- Fixed a bug that could occur when a [`TIMETZ`](https://www.cockroachlabs.com/docs/v22.1/time) column was indexed, and a query predicate constrained that column using a `<` or `>` operator with a `TIMETZ` constant. If the column contained values with time zones that did not match the time zone of the `TIMETZ` constant, it was possible that not all matching values could be returned by the query. Specifically, the results may not have included values within one microsecond of the predicate's absolute time. This bug was introduced when the `TIMETZ` datatype was first added in v20.1. It exists in all versions of v20.1, v20.2, v21.1, and v21.2 prior to this patch. [#74914][#74914] -- Fixed an internal error, `estimated row count must be non-zero`, that could occur during planning for queries over a table with a `TIMETZ` column. This error was due to a faulty assumption in the statistics estimation code about ordering of `TIMETZ` values, which has now been fixed. The error could occur when `TIMETZ` values used in the query had a different time zone offset than the `TIMETZ` values stored in the table. [#74914][#74914] -- The `--user` argument is no longer ignored when using [`cockroach sql`](https://www.cockroachlabs.com/docs/v22.1/cockroach-sql) in `--insecure` mode. [#75194][#75194] -- Fixed a bug where CockroachDB could incorrectly report the `KV bytes read` statistic in [`EXPLAIN ANALYZE`](https://www.cockroachlabs.com/docs/v22.1/explain-analyze) output. The bug is present only in v21.2 versions. [#75175][#75175] -- Fixed a bug that caused internal errors in queries with set operations, like [`UNION`](https://www.cockroachlabs.com/docs/v22.1/selection-queries#union-combine-two-queries), when corresponding columns on either side of the set operation were not the same. This error only occurred with a limited set of types. This bug is present in v20.2.6+, v21.1.0+, and v21.2.0+. 
[#75219][#75219] -- Fixed a bug where [`CREATE INDEX`](https://www.cockroachlabs.com/docs/v22.1/create-index) statements using expressions failed in some cases if they encountered an internal retry. [#75056][#75056] -- Fixed a bug when creating [hash-sharded indexes](https://www.cockroachlabs.com/docs/v22.1/hash-sharded-indexes) on existing tables, traffic would hit primarily on the single range of the index before it was split into more ranges for shards as the range size grew. This change makes schema changes able to presplit ranges on shard boundaries before the index becomes writable. Added the `sql.hash_sharded_range_pre_split.max` cluster setting which allows users to set the upper boundary of the amount of ranges. If the bucket count of the defined index is less than the cluster setting, the bucket count will be the amount of pre-split ranges. [#74923][#74923] -- Updated the `String()` function of `roleOption` to add a space on the role `VALID UNTIL`. [#75271][#75271] -- Fixed a bug where **SQL Activity** pages crashed when a column was sorted the 3rd time. [#75473][#75473] -- Fixed a bug where if multiple columns were added to a table inside a transaction, then none of the columns would be backfilled if the last column did not require a backfill. [#75076][#75076] -- Fixed a bug where in some cases queries that involved a scan which returned many results and which included lookups for individual keys were not returning all results from the table. [#75475][#75475] -- Fixed a bug where dropping and creating a [primary index](https://www.cockroachlabs.com/docs/v22.1/primary-key) constraint with the same name in a transaction would incorrectly fail. [#75155][#75155] -- `crdb_internal.deserialize_session` now checks if the `session_user` has the privilege to `SET ROLE` to the `current_user` before changing the session settings. [#75575][#75575] -- [Dedicated clusters](https://www.cockroachlabs.com/docs/cockroachcloud/create-your-cluster) can now restore tables and databases from backups made by tenants. [#73647][#73647] -- Fixed a bug that caused high SQL tail latencies during background [rebalancing](https://www.cockroachlabs.com/docs/v22.1/architecture/replication-layer) in the cluster. [#73697][#73697] -- Fixed a bug when tables or columns were dropped that contained [sequences](https://www.cockroachlabs.com/docs/v22.1/create-sequence), where the sequence remained even when the owner table or column did not exist. A sequence is created when a column is defined as a `SERIAL` type and the `serial_normalization` session variable is set to `sql_sequence`. In this case, the sequence is owned by the column and the table where the column exists. The sequence should be dropped when the owner table or column is dropped, which is the PostgreSQL behavior. CockroachDB now assigns correct ownership information to the sequence descriptor and column descriptor so that CockroachDB aligns with PostgreSQL. [#74840][#74840] -- Fixed a bug where the `options` query parameter was removed when using the `\c` command in the [SQL shell](https://www.cockroachlabs.com/docs/v22.1/cockroach-sql) to reconnect to the cluster. [#75673][#75673] -- [`cockroach node decommission`](https://www.cockroachlabs.com/docs/v22.1/cockroach-node) no longer causes query failure due to the decommissioning node not closing open SQL connections and still being marked as ready. The decommissioning process now includes a draining step that fixes this. In other words, a decommission now automatically drains a node. 
This also means that running a drain after a decommission is no longer necessary. It is optional, but recommended, that `cockroach node drain` is used before `cockroach node decommission` to avoid the possibility of a disturbance in query performance. [#74319][#74319] -- The `CancelSession` endpoint now correctly propagates gateway metadata when forwarding requests. [#75814][#75814] -- Fixed a bug which could cause nodes to crash when truncating abnormally large Raft logs. [#75793][#75793] -- Fixed a bug that caused incorrect values to be written to [computed columns](https://www.cockroachlabs.com/docs/v22.1/computed-columns) when their expressions were of the form `j->x = y`, where `j` is a [`JSON`](https://www.cockroachlabs.com/docs/v22.1/jsonb) column and `x` and `y` are constants. This bug also caused corruption of [partial indexes](https://www.cockroachlabs.com/docs/v22.1/partial-indexes) with `WHERE` clauses containing expressions of the same form. This bug was present since version v2.0. [#75914][#75914] -- [Changefeeds](https://www.cockroachlabs.com/docs/v22.1/changefeed-sinks) retry instead of fail on RPC send failure. [#75517][#75517] -- Fixed a rare race condition that could lead to client-visible errors like `found ABORTED record for implicitly committed transaction`. These errors were harmless in that they did not indicate data corruption, but they could be disruptive to clients. [#75601][#75601] -- Fixed a bug where swapping primary keys could lead to scenarios where [foreign key references](https://www.cockroachlabs.com/docs/v22.1/foreign-key) could lose their uniqueness. [#75820][#75820] -- Fixed a bug where [`CASE` expressions](https://www.cockroachlabs.com/docs/v22.1/scalar-expressions#conditional-expressions) with branches that result in types that cannot be cast to a common type caused internal errors. They now result in a user-facing error. [#76193][#76193] -- Fixed a bug that caused internal errors when querying tables with [virtual columns](https://www.cockroachlabs.com/docs/v22.1/computed-columns) in the primary key. This bug was only present since version v22.1.0-alpha.1 and does not appear in any production releases. [#75898][#75898] -- The DB console [**Databases**](https://www.cockroachlabs.com/docs/v22.1/ui-databases-page) page now shows stable, consistent values for database sizes. [#76315][#76315] -- Fixed a bug where comments were not cleaned up when the table primary keys were swapped, which could cause [`SHOW TABLE`](https://www.cockroachlabs.com/docs/v22.1/show-tables) to fail. [#76277][#76277] -- Fixed a bug where some of the [`cockroach node`](https://www.cockroachlabs.com/docs/v22.1/cockroach-node) subcommands did not handle `--timeout` properly. [#76427][#76427] -- Fixed a bug which caused the [optimizer](https://www.cockroachlabs.com/docs/v22.1/cost-based-optimizer) to omit join filters in rare cases when reordering joins, which could result in incorrect query results. This bug was present since v20.2. [#76334][#76334] -- Fixed a bug where the list of recently decommissioned nodes and the historical list of decommissioned nodes incorrectly display decommissioned nodes. 
[#76538][#76538] -- Fixed a bug where CockroachDB could incorrectly not return a row from a table with multiple column families when that row contains a `NULL` value when a composite type ([`FLOAT`](https://www.cockroachlabs.com/docs/v22.1/float), [`DECIMAL`](https://www.cockroachlabs.com/docs/v22.1/decimal), [`COLLATED STRING`](https://www.cockroachlabs.com/docs/v22.1/collate), or an array of these types) is included in the `PRIMARY KEY`. [#76563][#76563] -- There is now a 1 hour timeout when sending [Raft](https://www.cockroachlabs.com/docs/v22.1/architecture/replication-layer#raft) snapshots, to avoid stalled snapshot transfers preventing Raft log truncation and growing the Raft log very large. This is configurable via the `COCKROACH_RAFT_SEND_SNAPSHOT_TIMEOUT` environment variable. [#76589][#76589] -- Fixed an error that could sometimes occur when sorting the output of the [`SHOW CREATE ALL TABLES`](https://www.cockroachlabs.com/docs/v22.1/show-create) statement. [#76639][#76639] -- Fixed a bug where [backups](https://www.cockroachlabs.com/docs/v22.1/take-full-and-incremental-backups) incorrectly backed up database, schema, and type descriptors that were in a `DROP` state at the time the backup was run. This bug resulted in the user being unable to backup and restore if their cluster had dropped and public descriptors with colliding names. [#76635][#76635] -- Fixed a race condition that in rare circumstances could cause a node to panic with `unexpected Stopped processor` during shutdown. [#76825][#76825] -- Fixed a bug where the different stages of preparing, binding, and executing a prepared statement would use different implicit transactions. Now these stages all share the same implicit transaction. [#76792][#76792] -- Attempting to run concurrent profiles now works up to a concurrency limit of two. This will remove the occurrence of `profile id not found` errors while running up to two profiles concurrently. When a profile is not found, the error message has been updated to suggest remediation steps in order to unblock the user. [#76266][#76266] -- The content type header for the HTTP log sink is now set to `application/json` if the format of the log output is `JSON`. [#77014][#77014] -- Fixed a bug that could corrupt indexes containing [virtual columns](https://www.cockroachlabs.com/docs/v22.1/computed-columns) or [expressions](https://www.cockroachlabs.com/docs/v22.1/expression-indexes). The bug only occurred when the index's table had a foreign key reference to another table with an `ON DELETE CASCADE` action, and a row was deleted in the referenced table. This bug was present since virtual columns were added in version v21.1.0. [#77052][#77052] -- Fixed a bug where CockroachDB could crash when running a `SQL PREPARE` using the PostgreSQL extended protocol. [#77063][#77063] -- Fixed a bug where running SQL-level `EXECUTE` using the PostgreSQL extended protocol had inconsistent behavior and could in some cases crash the server. [#77063][#77063] -- The `crdb_internal.node_inflight_trace_spans` virtual table will now present traces for all operations ongoing on the respective node. Previously, the table would reflect a small percentage of ongoing operations unless tracing was explicitly enabled. [#76403][#76403] -- The default value of `kv.rangefeed_concurrent_catchup_iterators` was lowered to 16 to help avoid overload during `CHANGEFEED` restarts. [#75851][#75851] - -
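-- For reference, the computed-column and partial-index fix above concerns expressions shaped like the following hedged sketch (the table, column, and index names are hypothetical):
-
-    ~~~ sql
-    CREATE TABLE docs (
-      id INT PRIMARY KEY,
-      j JSONB,
-      -- Stored computed column of the form j->x = y described in the fix.
-      is_active BOOL AS (j->'state' = '"active"'::JSONB) STORED,
-      -- Partial index whose predicate has the same form.
-      INDEX docs_active_idx (id) WHERE j->'state' = '"active"'::JSONB
-    );
-    ~~~
-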

Performance improvements

- -- The memory representation of [`DECIMAL`](https://www.cockroachlabs.com/docs/v22.1/decimal) datums has been optimized to save space, avoid heap allocations, and eliminate indirection. This increases the speed of `DECIMAL` arithmetic and aggregation by up to 20% on large data sets. [#74590][#74590] -- `RESTORE` operations in [Serverless clusters](https://www.cockroachlabs.com/docs/cockroachcloud/create-a-serverless-cluster) now explicitly ask the host cluster to distribute data more evenly. [#75105][#75105] -- `IMPORT`, `CREATE INDEX`, and other [bulk ingestion jobs](https://www.cockroachlabs.com/docs/cockroachcloud/take-and-restore-self-managed-backups) run on Serverless clusters now collaborate with the host cluster to spread ingested data more evenly during ingest. [#75105][#75105] -- The `covar_pop` [aggregate function](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#aggregate-functions) is now evaluated more efficiently in a distributed setting. [#73062][#73062] -- Queries using [`NOT expr`](https://www.cockroachlabs.com/docs/v22.1/scalar-expressions) syntax can now be evaluated faster in some cases. [#75058][#75058] -- The `regr_sxx`, `regr_sxy`, and `regr_syy` aggregate functions are now evaluated more efficiently in a distributed setting. [#75619][#75619] -- Transaction read refresh operations performed during optimistic concurrency control's validation phase now use a time-bound file filter when scanning the LSM tree. This allows these operations to avoid scanning files that contain no keys written since the transaction originally performed its reads. [#74628][#74628] -- A set of bugs that rendered Queries-Per-Second (QPS) based lease and replica rebalancing in v21.2 and earlier ineffective under heterogeneously loaded cluster localities has been fixed. Additionally, a limitation which prevented CockroachDB from effectively alleviating extreme QPS hotspots on nodes has also been fixed. [#72296][#72296] -- The [optimizer](https://www.cockroachlabs.com/docs/v22.1/cost-based-optimizer) better optimizes queries that include both foreign key joins and self-joins. [#75582][#75582] -- A `LIMIT` can now be pushed below a foreign key join or self-join in more cases, which may result in more efficient query plans. [#75582][#75582] -- The performance of many `DECIMAL` arithmetic operators has been improved by as much as 60%. These operators include division (`/`), `sqrt`, `cbrt`, `exp`, `ln`, `log`, and `pow`. [#75770][#75770] -- Stores will retry requests that are directed at the incorrect range, most commonly following a recent range split. This has the effect of reducing tail latency following range splits. [#75446][#75446] -- The optimizer can now generate lookup joins in certain cases for non-covering indexes when performing a left outer/semi/anti join. [#58261][#58261] -- The optimizer now plans inner lookup joins using expression indexes in more cases, resulting in more efficient query plans. [#76078][#76078] -- Certain forms of automatically retried `read uncertainty` errors are now retried more efficiently, avoiding a network round trip. [#75905][#75905] -- The `regr_avgx`, `regr_avgy`, `regr_intercept`, `regr_r2`, and `regr_slope` aggregate functions are now evaluated more efficiently in a distributed setting. [#76007][#76007] -- `IMPORT`s and index backfills should now do a better job of spreading their load out over the nodes in the cluster. 
[#75894][#75894] -- Fixed a bug in the histogram estimation code that could cause the optimizer to think a scan of a multi-column index would produce 0 rows, when in fact it would produce many rows. This could cause the optimizer to choose a suboptimal plan. It is now less likely for the optimizer to choose a suboptimal plan when multiple multi-column indexes are available. [#76486][#76486] -- Added the `kv.replica_stats.addsst_request_size_factor` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings). This setting is used to tune Queries-Per-Second (QPS) sensitivity to large imports. By default, this setting is disabled. When enabled, the size of any `AddSSTableRequest` will contribute to QPS in inverse relation to this settings magnitude. By default this setting is configured to a conservative 50,000; every 50 kilobytes will be accounted for as an additional 1 QPS. [#76252][#76252] -- Queries with a [`LIMIT` clause](https://www.cockroachlabs.com/docs/v22.1/limit-offset) applied against a single table, either explicitly written, or implicit such as in an uncorrelated `EXISTS` subquery, now scan that table with improved latency if the table is defined with `LOCALITY REGIONAL BY ROW` and the number of qualified rows residing in the local region is less than or equal to the hard limit (sum of the `LIMIT` clause and optional `OFFSET` clause values). This optimization is only applied if the hard limit is 100000 or less. [#75431][#75431] -- Fixed a limitation where upon adding a new node to the cluster, lease counts among existing nodes could diverge until the new node was fully up-replicated. [#74077][#74077] -- The optimizer now attempts to plan lookup joins on indexes that include computed columns in more cases, which may improve query plans. [#76817][#76817] -- The optimizer produces more efficient query plans for `INSERT .. ON CONFLICT` statements that do not have explicit conflict columns or constraints and are performed on partitioned tables. [#76961][#76961] -- The `corr`, `covar_samp`, `sqrdiff`, and `regr_count` aggregate functions are now evaluated more efficiently in a distributed setting [#76754][#76754] -- The jobs scheduler now runs on a single node by default in order to reduce contention on the scheduled jobs table. [#73319][#73319] - -
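-- As a small sketch of tuning the setting described above (the value simply restates the documented default; whether to change it is workload-dependent):
-
-    ~~~ sql
-    -- Every 50 kilobytes of AddSSTable request data counts as one additional QPS
-    -- toward rebalancing decisions.
-    SET CLUSTER SETTING kv.replica_stats.addsst_request_size_factor = 50000;
-    ~~~
-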

Build changes

- -- Upgrade to Go 1.17.6 [#74655][#74655] - -
- -

Contributors

- -This release includes 866 merged PRs by 89 authors. -We would like to thank the following contributors from the CockroachDB community: - -- Max Neverov -- RajivTS (first-time contributor) -- Ulf Adams -- e-mbrown -- llllash (first-time contributor) -- shralex - -
- -[#58261]: https://github.com/cockroachdb/cockroach/pull/58261 -[#67501]: https://github.com/cockroachdb/cockroach/pull/67501 -[#68488]: https://github.com/cockroachdb/cockroach/pull/68488 -[#71542]: https://github.com/cockroachdb/cockroach/pull/71542 -[#71882]: https://github.com/cockroachdb/cockroach/pull/71882 -[#72296]: https://github.com/cockroachdb/cockroach/pull/72296 -[#72659]: https://github.com/cockroachdb/cockroach/pull/72659 -[#72665]: https://github.com/cockroachdb/cockroach/pull/72665 -[#72670]: https://github.com/cockroachdb/cockroach/pull/72670 -[#72925]: https://github.com/cockroachdb/cockroach/pull/72925 -[#73062]: https://github.com/cockroachdb/cockroach/pull/73062 -[#73319]: https://github.com/cockroachdb/cockroach/pull/73319 -[#73460]: https://github.com/cockroachdb/cockroach/pull/73460 -[#73647]: https://github.com/cockroachdb/cockroach/pull/73647 -[#73697]: https://github.com/cockroachdb/cockroach/pull/73697 -[#73886]: https://github.com/cockroachdb/cockroach/pull/73886 -[#73995]: https://github.com/cockroachdb/cockroach/pull/73995 -[#74006]: https://github.com/cockroachdb/cockroach/pull/74006 -[#74077]: https://github.com/cockroachdb/cockroach/pull/74077 -[#74082]: https://github.com/cockroachdb/cockroach/pull/74082 -[#74115]: https://github.com/cockroachdb/cockroach/pull/74115 -[#74179]: https://github.com/cockroachdb/cockroach/pull/74179 -[#74301]: https://github.com/cockroachdb/cockroach/pull/74301 -[#74319]: https://github.com/cockroachdb/cockroach/pull/74319 -[#74551]: https://github.com/cockroachdb/cockroach/pull/74551 -[#74560]: https://github.com/cockroachdb/cockroach/pull/74560 -[#74590]: https://github.com/cockroachdb/cockroach/pull/74590 -[#74628]: https://github.com/cockroachdb/cockroach/pull/74628 -[#74645]: https://github.com/cockroachdb/cockroach/pull/74645 -[#74655]: https://github.com/cockroachdb/cockroach/pull/74655 -[#74659]: https://github.com/cockroachdb/cockroach/pull/74659 -[#74661]: https://github.com/cockroachdb/cockroach/pull/74661 -[#74662]: https://github.com/cockroachdb/cockroach/pull/74662 -[#74664]: https://github.com/cockroachdb/cockroach/pull/74664 -[#74674]: https://github.com/cockroachdb/cockroach/pull/74674 -[#74706]: https://github.com/cockroachdb/cockroach/pull/74706 -[#74715]: https://github.com/cockroachdb/cockroach/pull/74715 -[#74726]: https://github.com/cockroachdb/cockroach/pull/74726 -[#74761]: https://github.com/cockroachdb/cockroach/pull/74761 -[#74774]: https://github.com/cockroachdb/cockroach/pull/74774 -[#74821]: https://github.com/cockroachdb/cockroach/pull/74821 -[#74825]: https://github.com/cockroachdb/cockroach/pull/74825 -[#74828]: https://github.com/cockroachdb/cockroach/pull/74828 -[#74831]: https://github.com/cockroachdb/cockroach/pull/74831 -[#74840]: https://github.com/cockroachdb/cockroach/pull/74840 -[#74863]: https://github.com/cockroachdb/cockroach/pull/74863 -[#74867]: https://github.com/cockroachdb/cockroach/pull/74867 -[#74869]: https://github.com/cockroachdb/cockroach/pull/74869 -[#74881]: https://github.com/cockroachdb/cockroach/pull/74881 -[#74882]: https://github.com/cockroachdb/cockroach/pull/74882 -[#74905]: https://github.com/cockroachdb/cockroach/pull/74905 -[#74914]: https://github.com/cockroachdb/cockroach/pull/74914 -[#74916]: https://github.com/cockroachdb/cockroach/pull/74916 -[#74920]: https://github.com/cockroachdb/cockroach/pull/74920 -[#74923]: https://github.com/cockroachdb/cockroach/pull/74923 -[#74929]: https://github.com/cockroachdb/cockroach/pull/74929 -[#75056]: 
https://github.com/cockroachdb/cockroach/pull/75056 -[#75058]: https://github.com/cockroachdb/cockroach/pull/75058 -[#75076]: https://github.com/cockroachdb/cockroach/pull/75076 -[#75097]: https://github.com/cockroachdb/cockroach/pull/75097 -[#75105]: https://github.com/cockroachdb/cockroach/pull/75105 -[#75114]: https://github.com/cockroachdb/cockroach/pull/75114 -[#75155]: https://github.com/cockroachdb/cockroach/pull/75155 -[#75174]: https://github.com/cockroachdb/cockroach/pull/75174 -[#75175]: https://github.com/cockroachdb/cockroach/pull/75175 -[#75194]: https://github.com/cockroachdb/cockroach/pull/75194 -[#75219]: https://github.com/cockroachdb/cockroach/pull/75219 -[#75223]: https://github.com/cockroachdb/cockroach/pull/75223 -[#75226]: https://github.com/cockroachdb/cockroach/pull/75226 -[#75231]: https://github.com/cockroachdb/cockroach/pull/75231 -[#75239]: https://github.com/cockroachdb/cockroach/pull/75239 -[#75262]: https://github.com/cockroachdb/cockroach/pull/75262 -[#75271]: https://github.com/cockroachdb/cockroach/pull/75271 -[#75274]: https://github.com/cockroachdb/cockroach/pull/75274 -[#75429]: https://github.com/cockroachdb/cockroach/pull/75429 -[#75431]: https://github.com/cockroachdb/cockroach/pull/75431 -[#75443]: https://github.com/cockroachdb/cockroach/pull/75443 -[#75446]: https://github.com/cockroachdb/cockroach/pull/75446 -[#75451]: https://github.com/cockroachdb/cockroach/pull/75451 -[#75458]: https://github.com/cockroachdb/cockroach/pull/75458 -[#75461]: https://github.com/cockroachdb/cockroach/pull/75461 -[#75470]: https://github.com/cockroachdb/cockroach/pull/75470 -[#75473]: https://github.com/cockroachdb/cockroach/pull/75473 -[#75475]: https://github.com/cockroachdb/cockroach/pull/75475 -[#75490]: https://github.com/cockroachdb/cockroach/pull/75490 -[#75517]: https://github.com/cockroachdb/cockroach/pull/75517 -[#75548]: https://github.com/cockroachdb/cockroach/pull/75548 -[#75551]: https://github.com/cockroachdb/cockroach/pull/75551 -[#75556]: https://github.com/cockroachdb/cockroach/pull/75556 -[#75562]: https://github.com/cockroachdb/cockroach/pull/75562 -[#75572]: https://github.com/cockroachdb/cockroach/pull/75572 -[#75575]: https://github.com/cockroachdb/cockroach/pull/75575 -[#75582]: https://github.com/cockroachdb/cockroach/pull/75582 -[#75588]: https://github.com/cockroachdb/cockroach/pull/75588 -[#75597]: https://github.com/cockroachdb/cockroach/pull/75597 -[#75601]: https://github.com/cockroachdb/cockroach/pull/75601 -[#75613]: https://github.com/cockroachdb/cockroach/pull/75613 -[#75619]: https://github.com/cockroachdb/cockroach/pull/75619 -[#75624]: https://github.com/cockroachdb/cockroach/pull/75624 -[#75628]: https://github.com/cockroachdb/cockroach/pull/75628 -[#75660]: https://github.com/cockroachdb/cockroach/pull/75660 -[#75673]: https://github.com/cockroachdb/cockroach/pull/75673 -[#75678]: https://github.com/cockroachdb/cockroach/pull/75678 -[#75710]: https://github.com/cockroachdb/cockroach/pull/75710 -[#75722]: https://github.com/cockroachdb/cockroach/pull/75722 -[#75727]: https://github.com/cockroachdb/cockroach/pull/75727 -[#75733]: https://github.com/cockroachdb/cockroach/pull/75733 -[#75737]: https://github.com/cockroachdb/cockroach/pull/75737 -[#75750]: https://github.com/cockroachdb/cockroach/pull/75750 -[#75753]: https://github.com/cockroachdb/cockroach/pull/75753 -[#75762]: https://github.com/cockroachdb/cockroach/pull/75762 -[#75770]: https://github.com/cockroachdb/cockroach/pull/75770 -[#75793]: 
https://github.com/cockroachdb/cockroach/pull/75793 -[#75804]: https://github.com/cockroachdb/cockroach/pull/75804 -[#75809]: https://github.com/cockroachdb/cockroach/pull/75809 -[#75814]: https://github.com/cockroachdb/cockroach/pull/75814 -[#75815]: https://github.com/cockroachdb/cockroach/pull/75815 -[#75816]: https://github.com/cockroachdb/cockroach/pull/75816 -[#75820]: https://github.com/cockroachdb/cockroach/pull/75820 -[#75822]: https://github.com/cockroachdb/cockroach/pull/75822 -[#75843]: https://github.com/cockroachdb/cockroach/pull/75843 -[#75851]: https://github.com/cockroachdb/cockroach/pull/75851 -[#75852]: https://github.com/cockroachdb/cockroach/pull/75852 -[#75854]: https://github.com/cockroachdb/cockroach/pull/75854 -[#75880]: https://github.com/cockroachdb/cockroach/pull/75880 -[#75890]: https://github.com/cockroachdb/cockroach/pull/75890 -[#75894]: https://github.com/cockroachdb/cockroach/pull/75894 -[#75898]: https://github.com/cockroachdb/cockroach/pull/75898 -[#75900]: https://github.com/cockroachdb/cockroach/pull/75900 -[#75905]: https://github.com/cockroachdb/cockroach/pull/75905 -[#75914]: https://github.com/cockroachdb/cockroach/pull/75914 -[#75957]: https://github.com/cockroachdb/cockroach/pull/75957 -[#75965]: https://github.com/cockroachdb/cockroach/pull/75965 -[#75971]: https://github.com/cockroachdb/cockroach/pull/75971 -[#75988]: https://github.com/cockroachdb/cockroach/pull/75988 -[#75990]: https://github.com/cockroachdb/cockroach/pull/75990 -[#76002]: https://github.com/cockroachdb/cockroach/pull/76002 -[#76007]: https://github.com/cockroachdb/cockroach/pull/76007 -[#76012]: https://github.com/cockroachdb/cockroach/pull/76012 -[#76068]: https://github.com/cockroachdb/cockroach/pull/76068 -[#76078]: https://github.com/cockroachdb/cockroach/pull/76078 -[#76095]: https://github.com/cockroachdb/cockroach/pull/76095 -[#76112]: https://github.com/cockroachdb/cockroach/pull/76112 -[#76115]: https://github.com/cockroachdb/cockroach/pull/76115 -[#76129]: https://github.com/cockroachdb/cockroach/pull/76129 -[#76166]: https://github.com/cockroachdb/cockroach/pull/76166 -[#76168]: https://github.com/cockroachdb/cockroach/pull/76168 -[#76193]: https://github.com/cockroachdb/cockroach/pull/76193 -[#76209]: https://github.com/cockroachdb/cockroach/pull/76209 -[#76213]: https://github.com/cockroachdb/cockroach/pull/76213 -[#76215]: https://github.com/cockroachdb/cockroach/pull/76215 -[#76252]: https://github.com/cockroachdb/cockroach/pull/76252 -[#76254]: https://github.com/cockroachdb/cockroach/pull/76254 -[#76265]: https://github.com/cockroachdb/cockroach/pull/76265 -[#76266]: https://github.com/cockroachdb/cockroach/pull/76266 -[#76271]: https://github.com/cockroachdb/cockroach/pull/76271 -[#76277]: https://github.com/cockroachdb/cockroach/pull/76277 -[#76285]: https://github.com/cockroachdb/cockroach/pull/76285 -[#76301]: https://github.com/cockroachdb/cockroach/pull/76301 -[#76315]: https://github.com/cockroachdb/cockroach/pull/76315 -[#76334]: https://github.com/cockroachdb/cockroach/pull/76334 -[#76346]: https://github.com/cockroachdb/cockroach/pull/76346 -[#76348]: https://github.com/cockroachdb/cockroach/pull/76348 -[#76358]: https://github.com/cockroachdb/cockroach/pull/76358 -[#76397]: https://github.com/cockroachdb/cockroach/pull/76397 -[#76399]: https://github.com/cockroachdb/cockroach/pull/76399 -[#76401]: https://github.com/cockroachdb/cockroach/pull/76401 -[#76403]: https://github.com/cockroachdb/cockroach/pull/76403 -[#76410]: 
https://github.com/cockroachdb/cockroach/pull/76410 -[#76416]: https://github.com/cockroachdb/cockroach/pull/76416 -[#76427]: https://github.com/cockroachdb/cockroach/pull/76427 -[#76437]: https://github.com/cockroachdb/cockroach/pull/76437 -[#76457]: https://github.com/cockroachdb/cockroach/pull/76457 -[#76458]: https://github.com/cockroachdb/cockroach/pull/76458 -[#76486]: https://github.com/cockroachdb/cockroach/pull/76486 -[#76512]: https://github.com/cockroachdb/cockroach/pull/76512 -[#76518]: https://github.com/cockroachdb/cockroach/pull/76518 -[#76523]: https://github.com/cockroachdb/cockroach/pull/76523 -[#76538]: https://github.com/cockroachdb/cockroach/pull/76538 -[#76563]: https://github.com/cockroachdb/cockroach/pull/76563 -[#76566]: https://github.com/cockroachdb/cockroach/pull/76566 -[#76583]: https://github.com/cockroachdb/cockroach/pull/76583 -[#76589]: https://github.com/cockroachdb/cockroach/pull/76589 -[#76605]: https://github.com/cockroachdb/cockroach/pull/76605 -[#76635]: https://github.com/cockroachdb/cockroach/pull/76635 -[#76639]: https://github.com/cockroachdb/cockroach/pull/76639 -[#76670]: https://github.com/cockroachdb/cockroach/pull/76670 -[#76675]: https://github.com/cockroachdb/cockroach/pull/76675 -[#76676]: https://github.com/cockroachdb/cockroach/pull/76676 -[#76691]: https://github.com/cockroachdb/cockroach/pull/76691 -[#76719]: https://github.com/cockroachdb/cockroach/pull/76719 -[#76739]: https://github.com/cockroachdb/cockroach/pull/76739 -[#76754]: https://github.com/cockroachdb/cockroach/pull/76754 -[#76757]: https://github.com/cockroachdb/cockroach/pull/76757 -[#76789]: https://github.com/cockroachdb/cockroach/pull/76789 -[#76792]: https://github.com/cockroachdb/cockroach/pull/76792 -[#76817]: https://github.com/cockroachdb/cockroach/pull/76817 -[#76825]: https://github.com/cockroachdb/cockroach/pull/76825 -[#76853]: https://github.com/cockroachdb/cockroach/pull/76853 -[#76863]: https://github.com/cockroachdb/cockroach/pull/76863 -[#76888]: https://github.com/cockroachdb/cockroach/pull/76888 -[#76907]: https://github.com/cockroachdb/cockroach/pull/76907 -[#76918]: https://github.com/cockroachdb/cockroach/pull/76918 -[#76937]: https://github.com/cockroachdb/cockroach/pull/76937 -[#76951]: https://github.com/cockroachdb/cockroach/pull/76951 -[#76961]: https://github.com/cockroachdb/cockroach/pull/76961 -[#76990]: https://github.com/cockroachdb/cockroach/pull/76990 -[#77004]: https://github.com/cockroachdb/cockroach/pull/77004 -[#77014]: https://github.com/cockroachdb/cockroach/pull/77014 -[#77016]: https://github.com/cockroachdb/cockroach/pull/77016 -[#77023]: https://github.com/cockroachdb/cockroach/pull/77023 -[#77043]: https://github.com/cockroachdb/cockroach/pull/77043 -[#77052]: https://github.com/cockroachdb/cockroach/pull/77052 -[#77063]: https://github.com/cockroachdb/cockroach/pull/77063 -[01cb84707]: https://github.com/cockroachdb/cockroach/commit/01cb84707 -[2331ac119]: https://github.com/cockroachdb/cockroach/commit/2331ac119 -[313d08532]: https://github.com/cockroachdb/cockroach/commit/313d08532 -[38cba0df0]: https://github.com/cockroachdb/cockroach/commit/38cba0df0 -[4bb7aef76]: https://github.com/cockroachdb/cockroach/commit/4bb7aef76 -[4d041e27c]: https://github.com/cockroachdb/cockroach/commit/4d041e27c -[6ea73f4e1]: https://github.com/cockroachdb/cockroach/commit/6ea73f4e1 -[74e2070c0]: https://github.com/cockroachdb/cockroach/commit/74e2070c0 -[9cffb82d3]: https://github.com/cockroachdb/cockroach/commit/9cffb82d3 -[c048446c9]: 
https://github.com/cockroachdb/cockroach/commit/c048446c9 -[c517f764f]: https://github.com/cockroachdb/cockroach/commit/c517f764f -[d679b6de0]: https://github.com/cockroachdb/cockroach/commit/d679b6de0 -[f9d6bea00]: https://github.com/cockroachdb/cockroach/commit/f9d6bea00 diff --git a/src/current/_includes/releases/v22.1/v22.1.0-alpha.3.md b/src/current/_includes/releases/v22.1/v22.1.0-alpha.3.md deleted file mode 100644 index 177e0f1d516..00000000000 --- a/src/current/_includes/releases/v22.1/v22.1.0-alpha.3.md +++ /dev/null @@ -1,62 +0,0 @@ -## v22.1.0-alpha.3 - -Release Date: March 14, 2022 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

Enterprise edition changes

- -- Altering the sink type of a [changefeed](https://www.cockroachlabs.com/docs/v22.1/changefeed-sinks) is now disallowed. An attempt to change a sink type now returns an error message recommending that you create a new sink type. [#77152][#77152] -- Currently executing [schedules](https://www.cockroachlabs.com/docs/v22.1/manage-a-backup-schedule) are cancelled immediately when the jobs scheduler is disabled. [#77306][#77306] -- The `changefeed.backfill_pending_ranges` [Prometheus metric](https://www.cockroachlabs.com/docs/v22.1/monitoring-and-alerting#prometheus-endpoint) was added to track ongoing backfill progress of a changefeed. [#76995][#76995] -- Changefeeds can now be created on tables with more than one [column family](https://www.cockroachlabs.com/docs/v22.1/column-families). Previously, this would error. Now, we create a feed that will emit individual messages per column family. Primary key columns will appear in the key for all column families, but in the value only in the families they are in. For example, if a table foo has families `primary` containing the primary key and a string column, and `secondary` containing a different string column, you'll see two messages for an insert that will look like `0 -> {id: 0, s1: "val1"}, 0 -> {s2: "val2"}`. If an update then only affects one family, you'll see only one message (e.g., `0 -> {s2: "newval"})`. This behavior reflects CockroachDB internal treatment of column families: writes are processed and stored separately, with only the ordering and atomicity guarantees that would apply to updates to two different tables within a single transaction. Avro schema names will include the family name concatenated to the table name. If you don't specify family names in the `CREATE` or `ALTER TABLE` statement, the default family names will either be `primary` or of the form `fam__`. [#77084][#77084] - -
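-- A hedged sketch of the multiple-column-family example above (the table definition mirrors the `foo` example in these notes; the Kafka sink URI is a hypothetical placeholder):
-
-    ~~~ sql
-    CREATE TABLE foo (
-      id INT PRIMARY KEY,
-      s1 STRING,
-      s2 STRING,
-      FAMILY "primary" (id, s1),
-      FAMILY secondary (s2)
-    );
-
-    -- Emits separate messages per column family, as described above.
-    CREATE CHANGEFEED FOR TABLE foo INTO 'kafka://localhost:9092';
-    ~~~
-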

SQL language changes

- -- Introduced the `crdb_internal.transaction_contention_events` virtual table, that exposes historical transaction contention events. The events exposed in the new virtual table also include transaction fingerprint IDs for both blocking and waiting transactions. This allows the new virtual table to be joined into statement statistics and transaction statistics tables. The new virtual table requires either the `VIEWACTIVITYREDACTED` or `VIEWACTIVITY` [role option](https://www.cockroachlabs.com/docs/v22.1/alter-role#role-options) to access. However, if the user has the `VIEWACTIVTYREDACTED` role, the contending key will be redacted. The contention events are stored in memory. The number of contention events stored is controlled via `sql.contention.event_store.capacity` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings). [#76917][#76917] -- Initial implementation of a scheduled logger used to capture index usage statistics to the [telemetry logging channel](https://www.cockroachlabs.com/docs/v22.1/logging#telemetry). [#76886][#76886] -- Added the ability for the TTL job to generate statistics on number of rows and number of expired rows on the table. This is off by default, controllable by the `ttl_row_stats_poll_interval` [storage parameter](https://www.cockroachlabs.com/docs/v22.1/sql-grammar#opt_with_storage_parameter_list). [#76837][#76837] -- Return ambiguous [unary operator error](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#operators) for ambiguous input like `~'1'` which can be interpreted as an integer (resulting in `-2`) or a bit string (resulting in `0`). [#76943][#76943] -- [`crdb_internal.default_privileges`](https://www.cockroachlabs.com/docs/v22.1/crdb-internal) no longer incorrectly shows default privileges for databases where the default privilege was not actually defined. [#77255][#77255] -- You can now create core [changefeeds](https://www.cockroachlabs.com/docs/v22.1/changefeeds-on-tables-with-column-families) on tables with more than one [column family](https://www.cockroachlabs.com/docs/v22.1/column-families). CockroachDB creates a feed that will emit individual messages per column family. Primary key columns will appear in the key for all column families, but in the value only in the families they are in. For example, if a table `foo` has families `primary` containing the primary key and a string column, and `secondary` containing a different string column, you'll see two messages for an insert that will look like `0 -> {id: 0, s1: "val1"}, 0 -> {s2: "val2"}`. If an update then only affects one family, you'll see only one message (e.g., 0 -> `{s2: "newval"}`). This behavior reflects CockroachDB internal treatment of column families: writes are processed and stored separately, with only the ordering and atomicity guarantees that would apply to updates to two different tables within a single transaction. [#77084][#77084] -- A new [built-in scalar function](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators) `crdb_internal.active_version()` can now be used alongside `crdb_internal.is_at_least_version()` to determine which cluster version is currently active and choose client-side feature levels accordingly. [#77233][#77233] -- [`IMPORT INTO with AVRO`](https://www.cockroachlabs.com/docs/v22.1/import-into) now supports Avro files with the following Avro types: `long.time-micros`, `int.time-millis`, `long.timestamp-micros`,`long.timestamp-millis`, and `int.date`. 
This feature works only if the user has created a CockroachDB table whose column types match the corresponding Avro types: `long.time-micros` and `int.time-millis` map to `TIME`, `long.timestamp-micros` and `long.timestamp-millis` map to `TIMESTAMP`, and `int.date` maps to `DATE`. [#76989][#76989] - -
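-- A short sketch of how a client might combine the version-gating built-ins mentioned above (the version string passed to `crdb_internal.is_at_least_version()` is an arbitrary example):
-
-    ~~~ sql
-    SELECT crdb_internal.active_version();
-    SELECT crdb_internal.is_at_least_version('21.2');
-    ~~~
-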

DB Console changes

- -- [DB Console](https://www.cockroachlabs.com/docs/v22.1/ui-overview) now displays locality information in [problem ranges](https://www.cockroachlabs.com/docs/v22.1/ui-debug-pages#reports) and [range status](https://www.cockroachlabs.com/docs/v22.1/ui-replication-dashboard#ranges). [#76892][#76892] -- DB Console now displays `is_leaseholder` and `lease_valid` information in problem ranges and range status pages. [#76892][#76892] -- Added the Hot Ranges page and linked to it on the sidebar. [#77330][#77330] -- Removed stray parenthesis at the end of the duration time for a successful job. [#77438][#77438] - -

Bug fixes

- -- Previously, a bug caused the Open Transaction chart in the [Metrics Page](https://www.cockroachlabs.com/docs/v22.1/ui-overview#metrics) to constantly increase for empty transactions. This issue has now been fixed. [#77237][#77237] -- Previously, [draining nodes](https://www.cockroachlabs.com/docs/v22.1/node-shutdown#draining) in a cluster without shutting them down could stall foreground traffic in the cluster. This patch fixes this bug. [#77246][#77246] - -

Performance improvements

- -- Queries of the form `SELECT * FROM t1 WHERE filter_expression ORDER BY secondIndexColumn LIMIT n;`, where there is a `CHECK` constraint on a `NOT NULL` first index column of the form `CHECK (firstIndexColumn IN (const_1, const_2, const_3, ...))`, can now be rewritten as a `UNION ALL` skip scan to avoid the previously required sort operation. [#76893][#76893] - -
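-- A hedged example of a query shape that can benefit from the rewrite above (the schema and constants are hypothetical): the `CHECK` constraint enumerates the first index column, the filter is arbitrary, and the `ORDER BY` is on the second index column:
-
-    ~~~ sql
-    CREATE TABLE t1 (
-      region STRING NOT NULL CHECK (region IN ('us-east', 'us-central', 'us-west')),
-      created_at TIMESTAMPTZ,
-      payload STRING,
-      INDEX (region, created_at)
-    );
-
-    SELECT * FROM t1 WHERE payload LIKE '%error%' ORDER BY created_at LIMIT 10;
-    ~~~
-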

Contributors

- -This release includes 108 merged PRs by 51 authors. - -[#74174]: https://github.com/cockroachdb/cockroach/pull/74174 -[#76837]: https://github.com/cockroachdb/cockroach/pull/76837 -[#76886]: https://github.com/cockroachdb/cockroach/pull/76886 -[#76892]: https://github.com/cockroachdb/cockroach/pull/76892 -[#76893]: https://github.com/cockroachdb/cockroach/pull/76893 -[#76917]: https://github.com/cockroachdb/cockroach/pull/76917 -[#76943]: https://github.com/cockroachdb/cockroach/pull/76943 -[#76989]: https://github.com/cockroachdb/cockroach/pull/76989 -[#76995]: https://github.com/cockroachdb/cockroach/pull/76995 -[#77084]: https://github.com/cockroachdb/cockroach/pull/77084 -[#77152]: https://github.com/cockroachdb/cockroach/pull/77152 -[#77233]: https://github.com/cockroachdb/cockroach/pull/77233 -[#77237]: https://github.com/cockroachdb/cockroach/pull/77237 -[#77246]: https://github.com/cockroachdb/cockroach/pull/77246 -[#77255]: https://github.com/cockroachdb/cockroach/pull/77255 -[#77306]: https://github.com/cockroachdb/cockroach/pull/77306 -[#77330]: https://github.com/cockroachdb/cockroach/pull/77330 -[#77438]: https://github.com/cockroachdb/cockroach/pull/77438 diff --git a/src/current/_includes/releases/v22.1/v22.1.0-alpha.4.md b/src/current/_includes/releases/v22.1/v22.1.0-alpha.4.md deleted file mode 100644 index 642798e2bb0..00000000000 --- a/src/current/_includes/releases/v22.1/v22.1.0-alpha.4.md +++ /dev/null @@ -1,101 +0,0 @@ -## v22.1.0-alpha.4 - -Release Date: March 21, 2022 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

Security updates

- -- Clusters can be configured to send HSTS headers in HTTP responses in order to enable browser-level enforcement of HTTPS for the cluster host. This is controlled by setting the `server.hsts.enabled` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) to `true` (default: `false`). Once the headers are present, after an initial request, browsers force HTTPS on all subsequent connections to the host. This reduces the possibility of man-in-the-middle (MitM) attacks, to which HTTP-to-HTTPS redirects are vulnerable. [#77244][#77244] - -
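-- For example, to opt in to the HSTS behavior described above:
-
-    ~~~ sql
-    SET CLUSTER SETTING server.hsts.enabled = true;
-    ~~~
-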

Enterprise edition changes

- -- Added a `created` time column to the `crdb_internal.active_range_feeds` virtual table to improve observability and debuggability of the rangefeed system. [#77597][#77597] -- [Incremental backups](https://www.cockroachlabs.com/docs/v22.1/take-full-and-incremental-backups#incremental-backups) created by [`BACKUP ... INTO`](https://www.cockroachlabs.com/docs/v22.1/backup) or [`BACKUP ... TO`](https://www.cockroachlabs.com/docs/v22.1/backup) are now stored by default under the path `/incrementals` within the backup destination, rather than under each backup's path. This enables easier management of cloud-storage provider policies specifically applied to incremental backups. [#75970][#75970] - -
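-- A minimal sketch of a full backup followed by an incremental backup into the same collection (the destination URI is a hypothetical placeholder); with this change the incremental layer is written under `/incrementals` inside the destination:
-
-    ~~~ sql
-    BACKUP INTO 'gs://bucket/backups?AUTH=implicit' AS OF SYSTEM TIME '-10s';
-    BACKUP INTO LATEST IN 'gs://bucket/backups?AUTH=implicit' AS OF SYSTEM TIME '-10s';
-    ~~~
-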

SQL language changes

- -- Added a `sql.auth.resolve_membership_single_scan.enabled` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings), which changes the query for an internal role membership cache. Previously the code would recursively look up each role in the membership hierarchy, leading to multiple queries. With the setting on, it uses a single query. The setting is `true` by default. [#77359][#77359] -- The [data type](https://www.cockroachlabs.com/docs/v22.1/data-types) of shard columns created for [hash-sharded indexes](https://www.cockroachlabs.com/docs/v22.1/hash-sharded-indexes) has changed from `INT4` to `INT8`. This should have no effect on behavior or performance. [#76930][#76930] -- Introduced the `sql.contention.resolver.queue_size` metric. This gauge metric gives the current length of the queue of contention events, each awaiting translation of its transaction ID into a transaction fingerprint ID. This metric can be used to assess the level of backlog unresolved contention events. [#77514][#77514] -- Introduced the `sql.contention.resolver.retries` metric. This counter metric reflects the number of retries performed by the contention event store attempting to translate the transaction ID of the contention event into a transaction fingerprint ID. Any spike in this metric could indicate a possible anomaly in the transaction ID resolution protocol. [#77514][#77514] -- Introduced the `sql.contention.resolver.failed_resolution` metric. This counter metric gives the total number of failed attempts to translate the transaction ID in the contention events into a transaction fingerprint ID. A spike in this metric indicates likely severe failure in the transaction ID resolution protocol. [#77514][#77514] -- Added support for `date_trunc(string, interval)` for compatibility with PostgreSQL. This built-in function is required to support [Django 4.1](https://docs.djangoproject.com/en/dev/releases/4.1/). [#77508][#77508] -- Introduced a `sql.contention.event_store.duration_threshold` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings). This cluster setting specifies the minimum contention duration to cause the contention events to be collected into the `crdb_internal.transaction_contention_events` virtual table (default: `0`). [#77623][#77623] -- Added support for super region functionality. Super regions allow the user to define a set of regions on the database such that any `REGIONAL BY TABLE` based in the super region or any `REGIONAL BY ROW` partition in the super region will have all their replicas in regions within the super region. The primary use is for [data domiciling](https://www.cockroachlabs.com/docs/v22.1/data-domiciling). Super regions are an experimental feature and are gated behind the session variable: `enable_super_regions`. The [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) `sql.defaults.super_regions.enabled` is used to enable super regions (default: `false`). [#76620][#76620] - -
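-- Two small examples of the additions above (the interval literal is arbitrary; enabling super regions affects only the current session):
-
-    ~~~ sql
-    -- date_trunc on an INTERVAL, added for PostgreSQL compatibility.
-    SELECT date_trunc('hour', INTERVAL '3 days 02:47:33');
-
-    -- Opt in to the experimental super regions feature for the current session.
-    SET enable_super_regions = 'on';
-    ~~~
-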

Operational changes

- 
-- Added the `server.shutdown.connection_wait` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) to the [draining process](https://www.cockroachlabs.com/docs/v22.1/node-shutdown#node-shutdown-sequence) configuration. This adds a draining phase in which the server waits for SQL connections to be closed; once all SQL connections are closed (or the timeout is reached), the server proceeds to the next draining phase. This provides a workaround for the intermittent connection blips and failed requests that can occur when nodes are restarted. [#72991][#72991]
-- The [cluster settings](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) `admission.kv.tenant_weights.enabled` and `admission.kv.stores.tenant_weights.enabled` can now be used to enable tenant weights in multi-tenant storage servers (default: `false`). Tenant weights are based on the number of ranges for each tenant, and allow for weighted fair sharing. [#77575][#77575] - -
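For illustration only (the `60s` value below is an arbitrary assumption, not a recommended default), the new draining phase can be configured like any other cluster setting:

~~~ sql
-- Wait up to 60 seconds for clients to close their SQL connections
-- before the node moves on to the next phase of the drain.
SET CLUSTER SETTING server.shutdown.connection_wait = '60s';
~~~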

Command-line changes

- 
-- The `cockroach debug tsdump` command can now be re-run with the import filename set to `-`, which allows viewing time-series data even in the case of node failures. [#77247][#77247] - -

DB Console changes

- 
-- Added an alert banner on the [**Cluster Overview** page](https://www.cockroachlabs.com/docs/v22.1/ui-cluster-overview-page) that indicates when more than one node version is detected on the cluster. The alert lists the node versions detected and how many nodes are on each version. This provides more visibility into the progress of a cluster upgrade. [#76932][#76932]
-- The **Compactions/Flushes** graph on the [Storage dashboard](https://www.cockroachlabs.com/docs/v22.1/ui-storage-dashboard) now shows bytes written by these operations, and has been split into separate per-node graphs. [#77558][#77558]
-- The [**Explain Plan**](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page#explain-plans) tab of the [**Statement Details** page](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page#statement-details-page) now shows statistics for all the plans executed by the selected statement during the selected time period. [#77632][#77632]
-- Active operations can now be inspected on a new **Active operations** page linked from the [**Advanced Debug** page](https://www.cockroachlabs.com/docs/v22.1/ui-debug-pages). This facilitates viewing active traces and taking snapshots. [#77712][#77712] - -

Bug fixes

- -- Fixed a bug where clicking the "Reset SQL stats" button on the [**Statements**](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page) and [**Transactions**](https://www.cockroachlabs.com/docs/v22.1/ui-transactions-page) pages caused, in DB Console, an infinite loading spinner and, in CockroachDB {{ site.data.products.cloud }} Console, the **Statements**/**Transactions** table to be reloaded without limiting to the time range that the user had selected. The button now correctly reloads the table according to the selected time in both DB Console and CockroachDB {{ site.data.products.cloud }} Console. [#77571][#77571] -- Previously, the `information_schema` tables `administrable_role_authorizations` and `applicable_roles` were incorrectly always returning the current user for the grantee column. Now, the column will contain the correct role that was granted the parent role given in the `role_name` column. [#77359][#77359] -- Fixed a bug that caused errors when attempting to create table statistics (with [`CREATE STATISTICS`](https://www.cockroachlabs.com/docs/v22.1/create-statistics) or [`EXPLAIN ANALYZE`](https://www.cockroachlabs.com/docs/v22.1/explain-analyze)) for a table containing an index which indexed only [virtual computed columns](https://www.cockroachlabs.com/docs/v22.1/computed-columns). This bug had been present since version v21.1.0. [#77507][#77507] -- All automatic jobs are now hidden from the [Jobs page](https://www.cockroachlabs.com/docs/v22.1/ui-jobs-page) of the DB Console. [#77331][#77331] -- Added a limit of 7 concurrent asynchronous consistency checks per store, with an upper timeout of 1 hour. This prevents abandoned consistency checks from building up in some circumstances, which could lead to increasing disk usage as they held onto Pebble snapshots. [#77433][#77433] -- Fixed a bug causing incorrect counts of `under_replicated_ranges` and `over_replicated_ranges` in the `crdb_internal.replication_stats` table for multi-region databases. [#76430][#76430] -- Previously, intermittent validation failures could be observed on schema objects, where a job ID was detected as missing when validating objects in a transaction. This has been fixed. [#76532][#76532] -- Previously, adding a [hash-sharded index](https://www.cockroachlabs.com/docs/v22.1/hash-sharded-indexes) to a table watched by a changefeed could produce errors due to not distinguishing between backfills of visible columns and backfills of merely public ones, which may be hidden or inaccessible. This is now fixed. [#77316][#77316] -- Fixed a bug that caused internal errors when `COALESCE` and `IF` expressions had inner expressions with different types that could not be cast to a common type. [#77608][#77608] -- A zone config change event now includes the correct details of what was changed instead of incorrectly displaying `undefined`. [#77773][#77773] - -

Performance improvements

- 
-- Improved the [optimizer](https://www.cockroachlabs.com/docs/v22.1/cost-based-optimizer)'s cardinality estimates for predicates involving many constrained columns. This may result in better index selection for these queries. [#76786][#76786]
-- Improved the jobs system's resilience to scheduled jobs that could lock up the jobs and scheduled-jobs tables for long periods of time. Each schedule now has a limited amount of time to complete its execution. The timeout is controlled via the `jobs.scheduler.schedule_execution.timeout` setting. [#77372][#77372] - -
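As a hedged example (the `2m` value is an assumption chosen for illustration, and this assumes the setting accepts a duration), the per-schedule execution timeout can be adjusted as follows:

~~~ sql
-- Allow each schedule up to two minutes to complete its execution.
SET CLUSTER SETTING jobs.scheduler.schedule_execution.timeout = '2m';
~~~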
- -

Contributors

- -This release includes 112 merged PRs by 50 authors. -We would like to thank the following contributors from the CockroachDB community: - -- Steve Kuznetsov (first-time contributor) - -
- -[#72991]: https://github.com/cockroachdb/cockroach/pull/72991 -[#75970]: https://github.com/cockroachdb/cockroach/pull/75970 -[#76430]: https://github.com/cockroachdb/cockroach/pull/76430 -[#76532]: https://github.com/cockroachdb/cockroach/pull/76532 -[#76620]: https://github.com/cockroachdb/cockroach/pull/76620 -[#76786]: https://github.com/cockroachdb/cockroach/pull/76786 -[#76897]: https://github.com/cockroachdb/cockroach/pull/76897 -[#76930]: https://github.com/cockroachdb/cockroach/pull/76930 -[#76932]: https://github.com/cockroachdb/cockroach/pull/76932 -[#77244]: https://github.com/cockroachdb/cockroach/pull/77244 -[#77247]: https://github.com/cockroachdb/cockroach/pull/77247 -[#77316]: https://github.com/cockroachdb/cockroach/pull/77316 -[#77331]: https://github.com/cockroachdb/cockroach/pull/77331 -[#77359]: https://github.com/cockroachdb/cockroach/pull/77359 -[#77372]: https://github.com/cockroachdb/cockroach/pull/77372 -[#77433]: https://github.com/cockroachdb/cockroach/pull/77433 -[#77507]: https://github.com/cockroachdb/cockroach/pull/77507 -[#77508]: https://github.com/cockroachdb/cockroach/pull/77508 -[#77514]: https://github.com/cockroachdb/cockroach/pull/77514 -[#77558]: https://github.com/cockroachdb/cockroach/pull/77558 -[#77571]: https://github.com/cockroachdb/cockroach/pull/77571 -[#77575]: https://github.com/cockroachdb/cockroach/pull/77575 -[#77597]: https://github.com/cockroachdb/cockroach/pull/77597 -[#77606]: https://github.com/cockroachdb/cockroach/pull/77606 -[#77608]: https://github.com/cockroachdb/cockroach/pull/77608 -[#77623]: https://github.com/cockroachdb/cockroach/pull/77623 -[#77632]: https://github.com/cockroachdb/cockroach/pull/77632 -[#77712]: https://github.com/cockroachdb/cockroach/pull/77712 -[#77773]: https://github.com/cockroachdb/cockroach/pull/77773 -[962cd2d26]: https://github.com/cockroachdb/cockroach/commit/962cd2d26 diff --git a/src/current/_includes/releases/v22.1/v22.1.0-alpha.5.md b/src/current/_includes/releases/v22.1/v22.1.0-alpha.5.md deleted file mode 100644 index bc76463644a..00000000000 --- a/src/current/_includes/releases/v22.1/v22.1.0-alpha.5.md +++ /dev/null @@ -1,69 +0,0 @@ -## v22.1.0-alpha.5 - -Release Date: March 28, 2022 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

Enterprise edition changes

- 
-- [Changefeeds](https://www.cockroachlabs.com/docs/v22.1/monitor-and-debug-changefeeds) now record the message size histogram. [#77711][#77711]
-- Users can now perform initial scans on [newly added changefeed](https://www.cockroachlabs.com/docs/v22.1/alter-changefeed) targets by executing a statement of the form `ALTER CHANGEFEED <job_id> ADD <targets> WITH initial_scan`. The default behavior is to perform no initial scans on newly added targets, but users can make this explicit by replacing `initial_scan` with `no_initial_scan`. [#77263][#77263]
-- The default value of the `server.child_metrics.enabled` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) is now `true`. [#77561][#77561]
-- CockroachDB now limits the number of concurrent catchup scan requests issued by [rangefeed](https://www.cockroachlabs.com/docs/v22.1/create-and-configure-changefeeds#enable-rangefeeds) clients. [#77866][#77866] - -
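A hedged sketch of the new option; the job ID and table name below are placeholders for illustration, not values from the original note:

~~~ sql
-- Add a table to an existing changefeed (job 12345) and backfill it with an initial scan.
ALTER CHANGEFEED 12345 ADD db.public.orders WITH initial_scan;
~~~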

SQL language changes

- 
-- TTL metrics are now labelled by relation name if the `server.child_metrics.enabled` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) is enabled and the `ttl_label_metrics` storage parameter is set to `true`. The labelling is opt-in to prevent potentially unbounded cardinality on TTL-related metrics. [#77567][#77567]
-- Added support for the `MOVE` command, which moves a SQL cursor without fetching any rows from it. `MOVE` is identical to [`FETCH`](https://www.cockroachlabs.com/docs/v22.1/limit-offset), including in its arguments and syntax, except it doesn't return any rows. [#74877][#74877]
-- Added the `enable_implicit_transaction_for_batch_statements` [session variable](https://www.cockroachlabs.com/docs/v22.1/set-vars). It defaults to false. When true, multiple statements in a single query (a "batch statement") will all run in the same implicit transaction, which matches the PostgreSQL wire protocol. This setting is provided for users who want to preserve the behavior of CockroachDB versions v21.2 and lower. [#77865][#77865]
-- The `enable_implicit_transaction_for_batch_statements` session variable now defaults to false. [#77973][#77973]
-- The `experimental_enable_hash_sharded_indexes` session variable is deprecated, as hash-sharded indexes are enabled by default. Enabling this setting results in a no-op. [#78038][#78038]
-- Added a new `crdb_internal.merge_stats_metadata` [built-in function](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#built-in-functions) that groups statement statistics metadata. [#78064][#78064]
-- [Changefeeds](https://www.cockroachlabs.com/docs/v22.1/changefeeds-on-tables-with-column-families) can now specify column families to target, using the syntax `[TABLE] foo FAMILY bar`. For example, `CREATE CHANGEFEED FOR TABLE foo FAMILY bar, TABLE foo FAMILY baz, TABLE users` will create a feed that watches the `bar` and `baz` column families of `foo`, as well as the whole table `users`. A family must exist with that name when the feed is created. If all columns in a watched family are dropped in an `ALTER TABLE` statement, the feed will fail with an error, similar to dropping a table. The behavior is otherwise similar to feeds created using `split_column_families`. [#77964][#77964]
-- [Casts](https://www.cockroachlabs.com/docs/v22.1/data-types#data-type-conversions-and-casts) that are affected by the `DateStyle` or `IntervalStyle` session variables used in [computed columns](https://www.cockroachlabs.com/docs/v22.1/computed-columns) or [partial index](https://www.cockroachlabs.com/docs/v22.1/partial-indexes) definitions will be rewritten to use immutable functions after upgrading to v22.1. [#78229][#78229]
-- When the user runs [`SHOW BACKUP`](https://www.cockroachlabs.com/docs/v22.1/show-backup) on an encrypted incremental backup, they must set the `encryption_info_dir` option to the full backup's directory in order for `SHOW BACKUP` to work. [#78096][#78096]
-- The [`BACKUP TO`](https://www.cockroachlabs.com/docs/v22.1/backup) syntax to take backups is deprecated, and will be removed in a future release. Create a backup collection using the `BACKUP INTO` syntax. [#78250][#78250]
-- Using the [`RESTORE FROM`](https://www.cockroachlabs.com/docs/v22.1/restore) syntax without an explicit subdirectory pointing to a backup in a collection is deprecated, and will be removed in a future release. Use `RESTORE FROM <subdirectory> IN <collectionURI>` to restore a particular backup in a collection. [#78250][#78250] - -
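A self-contained sketch of the new `MOVE` command; the `events` table and cursor name are hypothetical, and the direction/count syntax mirrors `FETCH` per the note above:

~~~ sql
BEGIN;
DECLARE c CURSOR FOR SELECT * FROM events ORDER BY ts;
MOVE FORWARD 10 FROM c;  -- advance the cursor ten rows without returning them
FETCH 5 FROM c;          -- then fetch the next five rows
COMMIT;
~~~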

Command-line changes

- -- Fixed a bug where starting [`cockroach demo`](https://www.cockroachlabs.com/docs/v22.1/cockroach-demo) with the `--global` flag would not simulate latencies correctly when combined with the `--insecure` flag. [#78169][#78169] - -

DB Console changes

- 
-- Added full scan, distributed, and vectorized information to the plan displayed on the [**Statement Details**](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page#statement-details-page) page. [#78114][#78114] - -

Bug fixes

- 
-- Fixed a bug that caused successive [schema change](https://www.cockroachlabs.com/docs/v22.1/online-schema-changes) backfills to skip spans that were checkpointed by an initial backfill that was restarted. [#77797][#77797]
-- Fixed a bug where statements that arrived in a batch during the simple query protocol would all execute in their own implicit [transactions](https://www.cockroachlabs.com/docs/v22.1/transactions). CockroachDB now matches the PostgreSQL wire protocol behavior, so all these statements share the same implicit transaction. If a `BEGIN` is included in a statement batch, then the existing implicit transaction is upgraded to an explicit transaction. [#77865][#77865]
-- Fixed a bug in the [optimizer](https://www.cockroachlabs.com/docs/v22.1/cost-based-optimizer) that prevented expressions of the form `(NULL::STRING[] <@ ARRAY['x'])` from being folded to `NULL`. This bug was introduced in v21.2.0. [#78042][#78042]
-- Fixed broken links to the **Statement Details** page from the [**Advanced Debug**](https://www.cockroachlabs.com/docs/v22.1/ui-debug-pages) and [**Sessions**](https://www.cockroachlabs.com/docs/v22.1/ui-sessions-page) pages. [#78099][#78099]
-- Fixed a memory leak in the [Pebble](https://www.cockroachlabs.com/docs/v22.1/architecture/storage-layer#pebble) block cache. [#78260][#78260] - -

Contributors

- -This release includes 61 merged PRs by 31 authors. - -[#74877]: https://github.com/cockroachdb/cockroach/pull/74877 -[#77263]: https://github.com/cockroachdb/cockroach/pull/77263 -[#77561]: https://github.com/cockroachdb/cockroach/pull/77561 -[#77567]: https://github.com/cockroachdb/cockroach/pull/77567 -[#77711]: https://github.com/cockroachdb/cockroach/pull/77711 -[#77797]: https://github.com/cockroachdb/cockroach/pull/77797 -[#77865]: https://github.com/cockroachdb/cockroach/pull/77865 -[#77866]: https://github.com/cockroachdb/cockroach/pull/77866 -[#77964]: https://github.com/cockroachdb/cockroach/pull/77964 -[#77973]: https://github.com/cockroachdb/cockroach/pull/77973 -[#78038]: https://github.com/cockroachdb/cockroach/pull/78038 -[#78042]: https://github.com/cockroachdb/cockroach/pull/78042 -[#78064]: https://github.com/cockroachdb/cockroach/pull/78064 -[#78096]: https://github.com/cockroachdb/cockroach/pull/78096 -[#78099]: https://github.com/cockroachdb/cockroach/pull/78099 -[#78114]: https://github.com/cockroachdb/cockroach/pull/78114 -[#78169]: https://github.com/cockroachdb/cockroach/pull/78169 -[#78229]: https://github.com/cockroachdb/cockroach/pull/78229 -[#78249]: https://github.com/cockroachdb/cockroach/pull/78249 -[#78250]: https://github.com/cockroachdb/cockroach/pull/78250 -[#78260]: https://github.com/cockroachdb/cockroach/pull/78260 diff --git a/src/current/_includes/releases/v22.1/v22.1.0-beta.1.md b/src/current/_includes/releases/v22.1/v22.1.0-beta.1.md deleted file mode 100644 index d2301c94d6f..00000000000 --- a/src/current/_includes/releases/v22.1/v22.1.0-beta.1.md +++ /dev/null @@ -1,98 +0,0 @@ -## v22.1.0-beta.1 - -Release Date: April 4, 2022 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

Backward-incompatible changes

- -- The volatility of cast operations between [strings](https://www.cockroachlabs.com/docs/v22.1/string) and [intervals](https://www.cockroachlabs.com/docs/v22.1/interval) or [timestamps](https://www.cockroachlabs.com/docs/v22.1/timestamp) has changed from immutable to stable. This means that these cast operations can no longer be used in computed columns or partial index definitions. Instead, use the following [built-in functions:](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators) `parse_interval`, `parse_date`, `parse_time`, `parse_timetz`, `parse_timestamp`, or `to_char`. Upon upgrade to v22.1, CockroachDB will automatically rewrite any computed columns or partial indexes that use the affected casts to use the new built-in functions. [#78455][#78455] - -
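An illustrative sketch of the replacement pattern (the table and column names are hypothetical): instead of relying on a now-stable cast in a computed column, use one of the immutable built-ins listed above.

~~~ sql
CREATE TABLE events (
  raw_ts STRING,
  -- A cast such as raw_ts::TIMESTAMP is no longer allowed here because it is
  -- only stable; use the immutable built-in instead.
  ts TIMESTAMP AS (parse_timestamp(raw_ts)) STORED
);
~~~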

Enterprise edition changes

- 
-- The tenant GC job now waits for protected timestamp records that target the tenant and have a protection time earlier than the tenant's drop time. [#78389][#78389]
-- Users can now provide an end time for [changefeeds](https://www.cockroachlabs.com/docs/v22.1/change-data-capture-overview) through the `end_time` option. When this option is provided, the changefeed runs until it reaches the specified end timestamp, and then the changefeed job ends with a successful status. There is also a new `initial_scan_only` option. When this option is set, the changefeed job runs until the initial scan has completed, and then ends with a successful status. [#78381][#78381]
-- Schema changes are no longer blocked while core-style changefeeds are executing. [#78360][#78360] - -
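A hedged sketch of the new options; the sink URI, table name, and timestamp below are placeholders, and the exact value format expected by `end_time` should be confirmed in the changefeed documentation:

~~~ sql
-- Emit changes only up to a fixed timestamp, then finish successfully.
CREATE CHANGEFEED FOR TABLE orders
  INTO 'kafka://broker:9092'
  WITH end_time = '2022-04-04 00:00:00';

-- Or: run just the initial scan and then finish successfully.
CREATE CHANGEFEED FOR TABLE orders
  INTO 'kafka://broker:9092'
  WITH initial_scan_only;
~~~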

SQL language changes

- 
-- Added support for `ALTER DATABASE ... ALTER SUPER REGION`. This command allows the user to change the regions of an existing super region. For example, after successful execution of the following statement, super region `test1` will consist of three regions: `ca-central-1`, `us-west-1`, and `us-east-1`.
-  {% include_cached copy-clipboard.html %}
-  ~~~sql
-  ALTER DATABASE db3 ALTER SUPER REGION "test1" VALUES "ca-central-1", "us-west-1", "us-east-1";
-  ~~~
-  `ALTER SUPER REGION` follows the same rules as `ADD` or `DROP` super region. [#78462][#78462]
- 
-- The [session variables](https://www.cockroachlabs.com/docs/v22.1/set-vars) `datestyle_enabled` and `intervalstyle_enabled`, and the [cluster settings](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) `sql.defaults.datestyle.enabled` and `sql.defaults.intervalstyle.enabled` no longer have any effect. After upgrading to v22.1, these settings are effectively always interpreted as `true`. [#78455][#78455]
-- `BUCKET_COUNT` for [hash-sharded indexes](https://www.cockroachlabs.com/docs/v22.1/hash-sharded-indexes) is now shown in the `crdb_internal.table_indexes` table. [#78625][#78625]
-- Implemented the [`COPY FROM ... ESCAPE ...`](https://www.cockroachlabs.com/docs/v22.1/copy-from) syntax. [#78417][#78417]
-- Disabled index recommendations in [`EXPLAIN`](https://www.cockroachlabs.com/docs/v22.1/explain) output for [`REGIONAL BY ROW` tables](https://www.cockroachlabs.com/docs/v22.1/multiregion-overview#regional-by-row-tables), as the previous recommendations were not valid. [#78676][#78676]
-- Added a `crdb_internal.validate_ttl_scheduled_jobs` built-in function. This verifies that each table points to a valid scheduled job which will carry out the deletion of expired rows. [#78373][#78373]
-- Added a `crdb_internal.repair_ttl_table_scheduled_job` built-in function, which repairs the given TTL table's scheduled job by replacing it with a valid schedule. [#78373][#78373] - -

Operational changes

- 
-- Added a new metric that charts the number of bytes received via snapshot on any given store. [#78464][#78464]
-- Bulk ingest operations like [`IMPORT`](https://www.cockroachlabs.com/docs/v22.1/import), [`RESTORE`](https://www.cockroachlabs.com/docs/v22.1/restore), or [`CREATE INDEX`](https://www.cockroachlabs.com/docs/v22.1/create-index) will now fail if they try to write to a node that has less than 5% storage capacity remaining. This threshold is configurable via the [`kv.bulk_io_write.min_capacity_remaining_fraction`](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) cluster setting. [#78579][#78579]
-- [`IMPORT`](https://www.cockroachlabs.com/docs/v22.1/import) jobs will now [pause](https://www.cockroachlabs.com/docs/v22.1/pause-job) if a node runs out of disk space. [#78587][#78587]
-- [`CREATE INDEX`](https://www.cockroachlabs.com/docs/v22.1/create-index) and some other schema changes will now [pause](https://www.cockroachlabs.com/docs/v22.1/pause-job) if a node is running out of disk space. [#78587][#78587]
-- [`RESTORE`](https://www.cockroachlabs.com/docs/v22.1/restore) will now [pause](https://www.cockroachlabs.com/docs/v22.1/pause-job) if a node is running out of disk space. [#78587][#78587] - -
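For illustration (the `0.10` value is an arbitrary assumption, not a recommended setting), the threshold can be raised so that bulk writes stop earlier:

~~~ sql
-- Refuse bulk ingestion writes to nodes with less than 10% storage capacity remaining.
SET CLUSTER SETTING kv.bulk_io_write.min_capacity_remaining_fraction = 0.10;
~~~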

Command-line changes

- 
-- [`cockroach demo`](https://www.cockroachlabs.com/docs/v22.1/cockroach-demo) has been reverted to not run in multi-tenant mode by default. [#78168][#78168] - -

DB Console changes

- 
-- The [Replication Dashboard](https://www.cockroachlabs.com/docs/v22.1/ui-replication-dashboard) now includes a graph of snapshot bytes received per node. [#78580][#78580]
-- The [`_status/nodes` endpoint](https://www.cockroachlabs.com/docs/v22.1/monitoring-and-alerting) is now available to all users with the `VIEWACTIVITY` role option, not just admins. Also, in the DB Console, the **Nodes Overview** and **Node Reports** pages will now display unredacted information containing node hostnames and IP addresses for all users with the `VIEWACTIVITY` role option. [#78362][#78362]
-- Improved colors for status badges on the [Jobs](https://www.cockroachlabs.com/docs/v22.1/ui-jobs-page) page. Three statuses on the Jobs page, `cancel-requested`, `pause-requested`, and `revert-failed`, previously had blue status badge colors that didn't properly reflect their meaning. The badge colors now better indicate meaning: `cancel-requested` and `pause-requested` have gray badges and `revert-failed` has a red badge. [#78611][#78611]
-- Fixed a bug where a node in the `UNAVAILABLE` state would not have latency defined, causing the network page to crash. [#78628][#78628] - -

Bug fixes

- -- CockroachDB may now fetch fewer rows when performing lookup and index joins on queries with a `LIMIT` clause. [#78473][#78473] -- Fixed a bug whereby certain catalog interactions which occurred concurrently with node failures were not internally retried. [#78698][#78698] -- Fixed a bug that caused the optimizer to generate invalid query plans which could result in incorrect query results. The bug, which has been present since version v21.1.0, can appear if all of the following conditions are true: - 1. The query contains a semi-join, such as queries in the form: `SELECT * FROM t1 WHERE EXISTS (SELECT * FROM t2 WHERE t1.a = t2.a);`. - 1. The inner table has an index containing the equality column, like `t2.a` in the example query. - 1. The index contains one or more columns that prefix the equality column. - 1. The prefix columns are `NOT NULL` and are constrained to a set of constant values via a `CHECK` constraint or an `IN` condition in the filter. [#78972][#78972] -- Fixed a bug where the `LATEST` file that points to the latest full [backup](https://www.cockroachlabs.com/docs/v22.1/take-full-and-incremental-backups#full-backups) in a collection was written to a directory path with the wrong structure. [#78281][#78281] - -

Performance improvements

- -- [Ranges](https://www.cockroachlabs.com/docs/v22.1/show-ranges) are split and rebalanced during bulk ingestion only when they become full, reducing unnecessary splits and merges. [#78328][#78328] -- Unused JS files are no longer downloaded when the DB Console loads. [#78665][#78665] - -

Contributors

- -This release includes 82 merged PRs by 40 authors. - -[#78168]: https://github.com/cockroachdb/cockroach/pull/78168 -[#78281]: https://github.com/cockroachdb/cockroach/pull/78281 -[#78328]: https://github.com/cockroachdb/cockroach/pull/78328 -[#78360]: https://github.com/cockroachdb/cockroach/pull/78360 -[#78362]: https://github.com/cockroachdb/cockroach/pull/78362 -[#78373]: https://github.com/cockroachdb/cockroach/pull/78373 -[#78381]: https://github.com/cockroachdb/cockroach/pull/78381 -[#78389]: https://github.com/cockroachdb/cockroach/pull/78389 -[#78417]: https://github.com/cockroachdb/cockroach/pull/78417 -[#78455]: https://github.com/cockroachdb/cockroach/pull/78455 -[#78462]: https://github.com/cockroachdb/cockroach/pull/78462 -[#78464]: https://github.com/cockroachdb/cockroach/pull/78464 -[#78473]: https://github.com/cockroachdb/cockroach/pull/78473 -[#78536]: https://github.com/cockroachdb/cockroach/pull/78536 -[#78565]: https://github.com/cockroachdb/cockroach/pull/78565 -[#78579]: https://github.com/cockroachdb/cockroach/pull/78579 -[#78580]: https://github.com/cockroachdb/cockroach/pull/78580 -[#78587]: https://github.com/cockroachdb/cockroach/pull/78587 -[#78611]: https://github.com/cockroachdb/cockroach/pull/78611 -[#78625]: https://github.com/cockroachdb/cockroach/pull/78625 -[#78628]: https://github.com/cockroachdb/cockroach/pull/78628 -[#78665]: https://github.com/cockroachdb/cockroach/pull/78665 -[#78676]: https://github.com/cockroachdb/cockroach/pull/78676 -[#78698]: https://github.com/cockroachdb/cockroach/pull/78698 -[#78700]: https://github.com/cockroachdb/cockroach/pull/78700 -[#78972]: https://github.com/cockroachdb/cockroach/pull/78972 -[6832dd1c9]: https://github.com/cockroachdb/cockroach/commit/6832dd1c9 diff --git a/src/current/_includes/releases/v22.1/v22.1.0-beta.2.md b/src/current/_includes/releases/v22.1/v22.1.0-beta.2.md deleted file mode 100644 index 5544e010c6b..00000000000 --- a/src/current/_includes/releases/v22.1/v22.1.0-beta.2.md +++ /dev/null @@ -1,149 +0,0 @@ -## v22.1.0-beta.2 - -Release Date: April 12, 2022 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

Enterprise edition changes

- 
-- The job scheduler is more efficient and should no longer lock up the jobs and scheduled jobs tables. [#79328][#79328]
-- Removed the default values from the [`SHOW CHANGEFEED JOBS`](https://www.cockroachlabs.com/docs/v22.1/show-jobs#show-changefeed-jobs) output. [#79361][#79361]
-- Checkpoint files are no longer overwritten; they are now versioned and written side-by-side in the `/progress` directory. Temporary checkpoint files are no longer written. [#79314][#79314]
-- Changefeeds can now be distributed across pods in tenant environments. [#79303][#79303] - -

SQL language changes

- 
-- Help text for creating indexes or primary key constraints no longer mentions `BUCKET_COUNT` because it can now be omitted and a default is used. [#79087][#79087]
-- Added support for showing default privileges in a schema. The [`SHOW DEFAULT PRIVILEGES`](https://www.cockroachlabs.com/docs/v22.1/show-default-privileges) clause now supports optionally passing a schema name: `SHOW DEFAULT PRIVILEGES [opt_for_role] [opt_schema_name]`. Example:
- 
-  ~~~ sql
-  SHOW DEFAULT PRIVILEGES IN SCHEMA s2
-  ~~~
-  ~~~
-  ----
-  role      for_all_roles  object_type  grantee    privilege_type
-  testuser  false          tables       testuser2  DROP
-  testuser  false          tables       testuser2  SELECT
-  testuser  false          tables       testuser2  UPDATE
-  ~~~
-  ~~~ sql
-  SHOW DEFAULT PRIVILEGES FOR ROLE testuser IN SCHEMA s2
-  ~~~
-  ~~~
-  ----
-  role      for_all_roles  object_type  grantee    privilege_type
-  testuser  false          tables       testuser2  DROP
-  testuser  false          tables       testuser2  SELECT
-  testuser  false          tables       testuser2  UPDATE
-  ~~~
-  [#79177][#79177]
- 
-- Added support for `SHOW SUPER REGIONS FROM DATABASE`. Example:
- 
-  ~~~ sql
-  SHOW SUPER REGIONS FROM DATABASE mr2
-  ~~~
-  ~~~
-  ----
-  mr2  ca-central-sr  {ca-central-1}
-  mr2  test           {ap-southeast-2,us-east-1}
-  ~~~
-  [#79190][#79190]
-- When you run [`SHOW BACKUP`](https://www.cockroachlabs.com/docs/v22.1/show-backup) on collections you must now use the `FROM` keyword: `SHOW BACKUP FROM <subdirectory> IN <collectionURI>`. [#79116][#79116]
-- `SHOW BACKUP` without the `IN` keyword to specify a subdirectory is deprecated and will be removed in a future release. Users should only create collection-based backups and view them with `SHOW BACKUP FROM <subdirectory> IN <collectionURI>`. [#79116][#79116]
-- Added extra logging for `COPY` statements to the [`SQL_EXEC`](https://www.cockroachlabs.com/docs/v22.1/logging-overview#logging-channels) channel if the `sql.trace.log_statement_execute` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) is set. [#79298][#79298]
-- An error message is now logged to the `SQL_EXEC` channel when parsing fails. [#79298][#79298]
-- Introduced an `expect_and_ignore_not_visible_columns_in_copy` [session variable](https://www.cockroachlabs.com/docs/v22.1/set-vars). If this is set, [`COPY FROM`](https://www.cockroachlabs.com/docs/v22.1/copy-from) with no column specifiers will assume hidden columns are in the copy data, but will ignore them when applying `COPY FROM`. [#79189][#79189]
-- Changed the default value of `sql.zone_configs.allow_for_secondary_tenant.enabled` to `false`. Moreover, this setting is no longer settable by secondary tenants. Instead, it's now a tenant read-only cluster setting. [#79160][#79160]
-- [`SHOW BACKUP`](https://www.cockroachlabs.com/docs/v22.1/show-backup) now reports accurate row and byte size counts on backups created by a tenant. [#79339][#79339]
-- Memory and disk usage are now reported for lookup joins in [`EXPLAIN ANALYZE`](https://www.cockroachlabs.com/docs/v22.1/explain-analyze). [#79351][#79351]
-- Privileges on a database are no longer inherited by tables/schemas created in that database. For example, `GRANT ALL ON DATABASE TEST TO foo`; `CREATE TABLE test.t()` no longer results in `foo` having `ALL` on the table. Users should rely on default privileges instead. You can achieve the same behavior by doing `USE test; ALTER DEFAULT PRIVILEGES GRANT ALL ON TABLES TO foo;` [#79509][#79509]
-- The `InvalidPassword` error code is now returned if the password is invalid or the user does not exist when authenticating. [#79515][#79515] - -
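As a hedged sketch of the recommended replacement for privilege inheritance (the database, table, and role names are the ones used in the example above, not new requirements):

~~~ sql
USE test;
-- Grant privileges on future tables via default privileges rather than
-- relying on database-level grants being inherited.
ALTER DEFAULT PRIVILEGES GRANT ALL ON TABLES TO foo;
CREATE TABLE test.t ();  -- foo receives ALL on t through the default privileges
~~~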

Operational changes

- 
-- The `kv.allocator.load_based_rebalancing_interval` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) now lets operators set the interval at which each store in the cluster will check for load-based lease or replica rebalancing opportunities. [#79073][#79073]
-- A new `kv.rangefeed.memory_budgets.enabled` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) controls [rangefeed](https://www.cockroachlabs.com/docs/v22.1/create-and-configure-changefeeds#enable-rangefeeds) memory budgeting for all new feeds. This setting can be used on CockroachDB {{ site.data.products.dedicated }} clusters to disable budgeting as a mitigation for bugs, for example, if feeds abort while nodes have sufficient free memory. [#79321][#79321]
-- Rangefeed memory budgets can be disabled on the fly when the cluster setting is changed, without the need to restart the feed. [#79321][#79321] - -
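A minimal sketch of the mitigation described above (disabling budgeting cluster-wide); this is intended only as a stop-gap, not a recommended steady-state configuration:

~~~ sql
-- Disable rangefeed memory budgeting; per the note above, the change takes
-- effect without restarting existing feeds.
SET CLUSTER SETTING kv.rangefeed.memory_budgets.enabled = false;
~~~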

DB Console changes

- 
-- Made minor styling changes on the [**Hot Ranges**](https://www.cockroachlabs.com/docs/v22.1/ui-hot-ranges-page) page to follow the same style as other pages. [#79501][#79501]
-- On the [**Statement Details**](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page#statement-details-page) page, changed the order of tabs to **Overview**, **Explain Plan**, **Diagnostics**, and **Execution Stats**, and renamed the **Explain Plan** tab to **Explain Plans** (plural). [#79234][#79234] - -

Bug fixes

- 
-- Fixed a nil pointer exception (NPE) during the cleanup of a failed or cancelled [`RESTORE`](https://www.cockroachlabs.com/docs/v22.1/restore) job. [#78992][#78992]
-- Fixed a bug where [`num_runs`](https://www.cockroachlabs.com/docs/v22.1/show-jobs) was incremented twice for certain jobs upon being started. [#79052][#79052]
-- A bug has been fixed that caused errors when trying to evaluate queries with `NULL` values annotated as a tuple type, such as `NULL:::RECORD`. This bug was present since version 19.1. [#78531][#78531]
-- [`ALTER TABLE [ADD|DROP] COLUMN`](https://www.cockroachlabs.com/docs/v22.1/alter-table) statements are now subject to [admission control](https://www.cockroachlabs.com/docs/v22.1/admission-control), which will prevent these operations from overloading the storage engine. [#79209][#79209]
-- Index usage stats are now properly captured for index joins. [#79241][#79241]
-- [`SHOW SCHEMAS FROM <database>`](https://www.cockroachlabs.com/docs/v22.1/show-schemas) now includes user-defined schemas. [#79308][#79308]
-- A distributed query that results in an error on the remote node no longer has an incomplete trace. [#79193][#79193]
-- [`IMPORT INTO`](https://www.cockroachlabs.com/docs/v22.1/import-into) no longer creates duplicate entries with [`UNIQUE`](https://www.cockroachlabs.com/docs/v22.1/unique) constraints in [`REGIONAL BY ROW` tables](https://www.cockroachlabs.com/docs/v22.1/multiregion-overview#regional-by-row-tables) and tables utilizing `UNIQUE WITHOUT INDEX` constraints. A new post-`IMPORT` validation step for those tables now fails and rolls back the `IMPORT` in such cases. [#79323][#79323]
-- Fixed a bug in IO which could result in [admission control](https://www.cockroachlabs.com/docs/v22.1/admission-control) failing to rate limit when traffic was stalled such that no work was admitted, despite the store being in an unhealthy state. [#79343][#79343]
-- The execution time as reported on [`DISTSQL`](https://www.cockroachlabs.com/docs/v22.1/explain-analyze#explain-analyze-distsql) diagrams within the statement bundle collected via [`EXPLAIN ANALYZE (DEBUG)`](https://www.cockroachlabs.com/docs/v22.1/explain-analyze#debug-option) is no longer negative when the statement encountered an error. [#79373][#79373]
-- CockroachDB now reports fewer "memory budget exceeded" errors when performing [lookup joins](https://www.cockroachlabs.com/docs/v22.1/joins#lookup-joins). [#79351][#79351]
-- `LIMIT` queries with an `ORDER BY` clause that scan the index of a virtual system table, such as `pg_type`, no longer return incorrect results. [#79460][#79460]
-- [`nextval` and `setval`](https://www.cockroachlabs.com/docs/v22.1/create-sequence#sequence-functions) are non-transactional, except when they are called in the same transaction in which the sequence was created. This change prevents a bug where creating a sequence and calling `nextval` and `setval` on it within a transaction caused the query containing `nextval` to hang. [#79506][#79506]
-- A bug has been fixed that caused the optimizer to generate query plans with logically incorrect lookup joins. The bug can only occur in queries with an inner join, e.g., `t1 JOIN t2`, if all of the following are true:
- 
-  - The join contains an equality condition between columns of both tables, e.g., `t1.a = t2.a`.
-  - A query filter or `CHECK` constraint constrains a column to a set of specific values, e.g., `t2.b IN (1, 2, 3)`. In the case of a `CHECK` constraint, the column must be `NOT NULL`.
-  - A query filter or `CHECK` constraint constrains a column to a range, e.g., `t2.c > 0`. In the case of a `CHECK` constraint, the column must be `NOT NULL`.
-  - An index contains a column from each of the criteria above, e.g., `INDEX t2(a, b, c)`. This bug has been present since version 21.2.0. [#79504][#79504]
-- A bug has been fixed which caused the optimizer to generate invalid query plans which could result in incorrect query results. The bug, which has been present since v21.1.0, can appear if all of the following conditions are true:
- 
-  - The query contains a semi-join, such as queries in the form `SELECT * FROM a WHERE EXISTS (SELECT * FROM b WHERE a.a @> b.b)`.
-  - The inner table has a multi-column inverted index containing the inverted column in the filter.
-  - The index prefix columns are constrained to a set of values via the filter or a `CHECK` constraint, e.g., with an `IN` operator. In the case of a `CHECK` constraint, the column is `NOT NULL`.
-  [#79504][#79504] - -

Performance improvements

- -- Uniqueness checks performed for inserts into [`REGIONAL BY ROW` tables](https://www.cockroachlabs.com/docs/v22.1/multiregion-overview#regional-by-row-tables) no longer search all regions for duplicates. In some cases, these checks will now only search a subset of regions when inserting a single row of constant values. [#79251][#79251] -- Bulk ingestion writes now use a lower priority for [admission control](https://www.cockroachlabs.com/docs/v22.1/admission-control). [#79352][#79352] -- Browser caching of files loaded in DB Console is now supported. [#79382][#79382] - -

Contributors

- -This release includes 84 merged PRs by 43 authors. - -[#78531]: https://github.com/cockroachdb/cockroach/pull/78531 -[#78992]: https://github.com/cockroachdb/cockroach/pull/78992 -[#79052]: https://github.com/cockroachdb/cockroach/pull/79052 -[#79073]: https://github.com/cockroachdb/cockroach/pull/79073 -[#79087]: https://github.com/cockroachdb/cockroach/pull/79087 -[#79116]: https://github.com/cockroachdb/cockroach/pull/79116 -[#79160]: https://github.com/cockroachdb/cockroach/pull/79160 -[#79177]: https://github.com/cockroachdb/cockroach/pull/79177 -[#79189]: https://github.com/cockroachdb/cockroach/pull/79189 -[#79190]: https://github.com/cockroachdb/cockroach/pull/79190 -[#79193]: https://github.com/cockroachdb/cockroach/pull/79193 -[#79209]: https://github.com/cockroachdb/cockroach/pull/79209 -[#79241]: https://github.com/cockroachdb/cockroach/pull/79241 -[#79251]: https://github.com/cockroachdb/cockroach/pull/79251 -[#79298]: https://github.com/cockroachdb/cockroach/pull/79298 -[#79303]: https://github.com/cockroachdb/cockroach/pull/79303 -[#79308]: https://github.com/cockroachdb/cockroach/pull/79308 -[#79311]: https://github.com/cockroachdb/cockroach/pull/79311 -[#79314]: https://github.com/cockroachdb/cockroach/pull/79314 -[#79321]: https://github.com/cockroachdb/cockroach/pull/79321 -[#79323]: https://github.com/cockroachdb/cockroach/pull/79323 -[#79328]: https://github.com/cockroachdb/cockroach/pull/79328 -[#79333]: https://github.com/cockroachdb/cockroach/pull/79333 -[#79339]: https://github.com/cockroachdb/cockroach/pull/79339 -[#79343]: https://github.com/cockroachdb/cockroach/pull/79343 -[#79351]: https://github.com/cockroachdb/cockroach/pull/79351 -[#79352]: https://github.com/cockroachdb/cockroach/pull/79352 -[#79361]: https://github.com/cockroachdb/cockroach/pull/79361 -[#79373]: https://github.com/cockroachdb/cockroach/pull/79373 -[#79377]: https://github.com/cockroachdb/cockroach/pull/79377 -[#79382]: https://github.com/cockroachdb/cockroach/pull/79382 -[#79460]: https://github.com/cockroachdb/cockroach/pull/79460 -[#79501]: https://github.com/cockroachdb/cockroach/pull/79501 -[#79234]: https://github.com/cockroachdb/cockroach/pull/79234 -[#79504]: https://github.com/cockroachdb/cockroach/pull/79504 -[#79506]: https://github.com/cockroachdb/cockroach/pull/79506 -[#79509]: https://github.com/cockroachdb/cockroach/pull/79509 -[#79515]: https://github.com/cockroachdb/cockroach/pull/79515 diff --git a/src/current/_includes/releases/v22.1/v22.1.0-beta.3.md b/src/current/_includes/releases/v22.1/v22.1.0-beta.3.md deleted file mode 100644 index d880f62499a..00000000000 --- a/src/current/_includes/releases/v22.1/v22.1.0-beta.3.md +++ /dev/null @@ -1,80 +0,0 @@ -## v22.1.0-beta.3 - -Release Date: April 18, 2022 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

Enterprise edition changes

- -- Unified the syntax for defining the behavior of initial scans on [changefeeds](https://www.cockroachlabs.com/docs/v22.1/change-data-capture-overview) by extending the [`initial_scan`](https://www.cockroachlabs.com/docs/v22.1/create-changefeed#initial-scan) option to accept three possible values: `yes`, `no`, or `only`. [#79471][#79471] -- Changefeeds can now target tables with [more than one column family](https://www.cockroachlabs.com/docs/v22.1/changefeeds-on-tables-with-column-families) using either the [`split_column_families` option](https://www.cockroachlabs.com/docs/v22.1/create-changefeed#split-column-families) or the `FAMILY` keyword. Changefeeds will emit individual messages per column family on a table. [#79448][#79448] -- The `full_table_name` option is now supported for all [changefeed](https://www.cockroachlabs.com/docs/v22.1/change-data-capture-overview) sinks. [#79448][#79448] -- `LATEST` files are no longer overwritten and are now versioned and written in the `/metadata/latest` directory for non-mixed-version clusters. [#79553][#79553] -- Previously, the [`ALTER CHANGEFEED`](https://www.cockroachlabs.com/docs/v22.1/alter-changefeed) statement would not work with changefeeds that use fully qualified names in their [`CREATE CHANGEFEED`](https://www.cockroachlabs.com/docs/v22.1/create-changefeed) statement. This is now fixed by ensuring that each existing target is added with its fully qualified name so that it can be resolved in validation checks. Every changefeed will now display the fully qualified name of every target in the [`SHOW CHANGEFEED JOB`](https://www.cockroachlabs.com/docs/v22.1/show-jobs) query. [#79745][#79745] -- Added a `changefeed.backfill.scan_request_size` setting to control scan request size during [backfill](https://www.cockroachlabs.com/docs/v22.1/changefeed-messages#schema-changes-with-column-backfill). [#79710][#79710] - -
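For illustration (the sink URI and table name are placeholders), the unified option now takes an explicit value:

~~~ sql
-- Run only the initial scan of the watched table, then stop emitting changes.
CREATE CHANGEFEED FOR TABLE orders
  INTO 'kafka://broker:9092'
  WITH initial_scan = 'only';
~~~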

SQL language changes

- -- CockroachDB now ensures the user passes the same number of locality-aware URIs for the full [backup](https://www.cockroachlabs.com/docs/v22.1/take-full-and-incremental-backups) destination as the `incremental_location` parameter (for example, `BACKUP INTO LATEST IN ($1, $2, $3) WITH incremental_location = ($4, $5, $6)`). [#79600][#79600] -- `EXPLAIN (DDL)`, when invoked on statements supported by the declarative schema changer, prints a plan of what the schema changer will do. This can be useful for anticipating the complexity of a schema change (for example, anything involving backfill or validation operations might be slow to run) and for troubleshooting. `EXPLAIN (DDL, VERBOSE)` produces a more detailed plan. [#79780][#79780] - -
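A small sketch of the new `EXPLAIN (DDL)` usage; the table and column are hypothetical, and this assumes the statement is one handled by the declarative schema changer in this release:

~~~ sql
-- Preview the declarative schema changer's plan for a backfilling schema change.
EXPLAIN (DDL) ALTER TABLE orders ADD COLUMN discount DECIMAL NOT NULL DEFAULT 0;

-- Produce a more detailed plan.
EXPLAIN (DDL, VERBOSE) ALTER TABLE orders ADD COLUMN discount DECIMAL NOT NULL DEFAULT 0;
~~~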

Operational changes

- -- Added a new time-series metric, `storage.marked-for-compaction-files`, for the count of files marked for compaction. This is useful for monitoring storage-level background migrations. [#79370][#79370] -- [Changefeed](https://www.cockroachlabs.com/docs/v22.1/change-data-capture-overview) creation and failure event logs are now emitted to the [`TELEMETRY`](https://www.cockroachlabs.com/docs/v22.1/eventlog#telemetry-events) logging channel. [#79749][#79749] - -

Command-line changes

- -- Introduced a new `ttllogger` [workload](https://www.cockroachlabs.com/docs/v22.1/cockroach-workload) which creates a TTL table emulating a "log" with rows expiring after the duration specified in the `--ttl` flag. [#79482][#79482] - -

DB Console changes

- -- The [Hot Ranges page](https://www.cockroachlabs.com/docs/v22.1/ui-hot-ranges-page) now allows filtering by column. [#79647][#79647] -- Added status of automatic statistics collection to the [Databases](https://www.cockroachlabs.com/docs/v22.1/ui-databases-page) and Databases [table details](https://www.cockroachlabs.com/docs/v22.1/ui-databases-page#table-details) pages. [#76168][#76168] -- Added timestamp of last statistics collection to the Databases > [Tables](https://www.cockroachlabs.com/docs/v22.1/ui-databases-page#tables-view) and Databases table details pages. [#76168][#76168] - -

Bug fixes

- -- Previously, privileges for restored tables were being generated incorrectly without taking into consideration their parent schema's default privilege descriptor. This is now fixed. [#79534][#79534] -- Fixed a bug that caused an internal error when the inner expression of a column access expression evaluated to `NULL`. For example, evaluation of the expression `(CASE WHEN b THEN ((ROW(1) AS a)) ELSE NULL END).a` would error when `b` is `false`. This bug was present since v19.1 or earlier. [#79529][#79529] -- Fixed a bug that caused an error when accessing a named column of a labeled tuple. The bug only occurred when an expression could produce one of several different tuples. For example, `(CASE WHEN true THEN (ROW(1) AS a) ELSE (ROW(2) AS a) END).a` would fail to evaluate. This bug was present since v22.1.0. Although present in previous versions, it was impossible to encounter due to limitations that prevented using tuples in this way. [#79529][#79529] -- Previously, queries reading from an index or primary key on `FLOAT` or `REAL` columns `DESC` would read `-0` for every `+0` value stored in the index. This has been fixed to correctly read `+0` for `+0` and `-0` for `-0`. [#79533][#79533] -- Fixed some cases where a job or schema change that had encountered an error would continue to execute for some time before eventually failing. [#79713][#79713] -- Previously, the optional `is_called` parameter of the `setval` function would default to `false` when not specified. It now defaults to `true` to match PostgreSQL behavior. [#79779][#79779] -- On the [Raft Messages](https://www.cockroachlabs.com/docs/v22.1/ui-debug-pages) page, the date picker and drag-to-zoom functionality are now fixed. [#79791][#79791] -- Fixed a bug where Pebble compaction heuristics could allow a large compaction backlog to accumulate, eventually leading to high read amplification. [#79597][#79597] - -
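A hedged sketch of the new `setval` default behavior described above (the sequence name is hypothetical):

~~~ sql
CREATE SEQUENCE seq;
-- is_called now defaults to true, matching PostgreSQL, so this is
-- equivalent to setval('seq', 10, true).
SELECT setval('seq', 10);
SELECT nextval('seq');  -- returns 11
~~~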

Contributors

- -This release includes 49 merged PRs by 34 authors. - -[#79370]: https://github.com/cockroachdb/cockroach/pull/79370 -[#79448]: https://github.com/cockroachdb/cockroach/pull/79448 -[#79458]: https://github.com/cockroachdb/cockroach/pull/79458 -[#79471]: https://github.com/cockroachdb/cockroach/pull/79471 -[#79482]: https://github.com/cockroachdb/cockroach/pull/79482 -[#79529]: https://github.com/cockroachdb/cockroach/pull/79529 -[#79533]: https://github.com/cockroachdb/cockroach/pull/79533 -[#79534]: https://github.com/cockroachdb/cockroach/pull/79534 -[#79553]: https://github.com/cockroachdb/cockroach/pull/79553 -[#79562]: https://github.com/cockroachdb/cockroach/pull/79562 -[#79597]: https://github.com/cockroachdb/cockroach/pull/79597 -[#79600]: https://github.com/cockroachdb/cockroach/pull/79600 -[#79647]: https://github.com/cockroachdb/cockroach/pull/79647 -[#79710]: https://github.com/cockroachdb/cockroach/pull/79710 -[#79713]: https://github.com/cockroachdb/cockroach/pull/79713 -[#79722]: https://github.com/cockroachdb/cockroach/pull/79722 -[#79742]: https://github.com/cockroachdb/cockroach/pull/79742 -[#79745]: https://github.com/cockroachdb/cockroach/pull/79745 -[#79749]: https://github.com/cockroachdb/cockroach/pull/79749 -[#79779]: https://github.com/cockroachdb/cockroach/pull/79779 -[#79780]: https://github.com/cockroachdb/cockroach/pull/79780 -[#79782]: https://github.com/cockroachdb/cockroach/pull/79782 -[#79791]: https://github.com/cockroachdb/cockroach/pull/79791 -[#76168]: https://github.com/cockroachdb/cockroach/pull/76168 -[30d477495]: https://github.com/cockroachdb/cockroach/commit/30d477495 -[528f0d8bf]: https://github.com/cockroachdb/cockroach/commit/528f0d8bf -[5e7fb2304]: https://github.com/cockroachdb/cockroach/commit/5e7fb2304 -[5fa73a530]: https://github.com/cockroachdb/cockroach/commit/5fa73a530 -[7cf738118]: https://github.com/cockroachdb/cockroach/commit/7cf738118 -[aafe68e31]: https://github.com/cockroachdb/cockroach/commit/aafe68e31 diff --git a/src/current/_includes/releases/v22.1/v22.1.0-beta.4.md b/src/current/_includes/releases/v22.1/v22.1.0-beta.4.md deleted file mode 100644 index da18c6ca537..00000000000 --- a/src/current/_includes/releases/v22.1/v22.1.0-beta.4.md +++ /dev/null @@ -1,55 +0,0 @@ -## v22.1.0-beta.4 - -Release Date: April 26, 2022 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

Backward-incompatible changes

- -- Users can no longer define the subdirectory of their full backup. This deprecated syntax can be enabled by changing the new `bulkio.backup.deprecated_full_backup_with_subdir` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) to `true`. [#80145][#80145] - -

SQL language changes

- 
-- Introduced a new [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings), `sql.multi_region.allow_abstractions_for_secondary_tenants.enabled`, to allow the operator to control whether a secondary tenant can make use of [multi-region abstractions](https://www.cockroachlabs.com/docs/v22.1/migrate-to-multiregion-sql#replication-zone-patterns-and-multi-region-sql-abstractions). [#80013][#80013]
-- Introduced new `cloudstorage.<provider>.write.node_rate_limit` and `cloudstorage.<provider>.write.node_burst_limit` [cluster settings](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) to limit the rate at which bulk operations write to the designated cloud storage provider. [#80243][#80243] - -

Command-line changes

- -- [`COPY ... FROM STDIN`](https://www.cockroachlabs.com/docs/v22.1/copy-from) now works from the [`cockroach` CLI](https://www.cockroachlabs.com/docs/v22.1/cockroach-commands). Note that it is not supported inside transactions. [#79819][#79819] -- The mechanism for query cancellation is disabled in the [`sql` shell](https://www.cockroachlabs.com/docs/v22.1/cockroach-sql) until a later patch release. [#79740][#79740] - -

DB Console changes

- -- Statements are no longer separated by aggregation interval on the [Statement Page](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page). Now, all statements with the same fingerprint show as a single row. [#80137][#80137] - -

Operational changes

- 
-- If a user does not pass a subdirectory in their backup command, CockroachDB will only ever attempt to create a full backup. Previously, a backup command with [`AS OF SYSTEM TIME`](https://www.cockroachlabs.com/docs/v22.1/as-of-system-time) and no subdirectory would append an incremental backup to an existing backup if the `AS OF SYSTEM TIME` backup's resolved subdirectory equaled the existing backup's directory. Now, an error is thrown instead. [#80145][#80145] - -

Bug fixes

- -- HTTP 304 responses no longer result in error logs. [#79855][#79855] -- Fixed a bug that may have caused a panic if a Kafka server being written to by a [`changefeed`](https://www.cockroachlabs.com/docs/v22.1/changefeed-sinks) failed at the wrong moment. [#79908][#79908] -- Fixed a bug that would prevent CockroachDB from resolving the public schema if a [`changefeed`](https://www.cockroachlabs.com/docs/v22.1/changefeed-sinks) is created with a cursor timestamp prior to when the public schema migration happened. [#80165][#80165] -- Fixed a bug where running an [`AS OF SYSTEM TIME`](https://www.cockroachlabs.com/docs/v22.1/as-of-system-time) incremental backup with an end time earlier than the previous backup's end time could lead to an incremental backup chain in the wrong order. Now, an error is thrown if the time specified in `AS OF SYSTEM TIME` is earlier than the previous backup's end time. [#80145][#80145] - -

Performance improvements

- -- Running multiple [schema changes](https://www.cockroachlabs.com/docs/v22.1/online-schema-changes) concurrently is now more efficient. [#79950][#79950] -- Performing a rollback of a [`CREATE TABLE AS`](https://www.cockroachlabs.com/docs/v22.1/create-table-as) statement with large quantities of data has similar performance to using [`DROP TABLE`](https://www.cockroachlabs.com/docs/v22.1/drop-table). [#79601][#79601] - -

Contributors

- -This release includes 38 merged PRs by 27 authors. - -[#79601]: https://github.com/cockroachdb/cockroach/pull/79601 -[#79740]: https://github.com/cockroachdb/cockroach/pull/79740 -[#79819]: https://github.com/cockroachdb/cockroach/pull/79819 -[#79855]: https://github.com/cockroachdb/cockroach/pull/79855 -[#79908]: https://github.com/cockroachdb/cockroach/pull/79908 -[#79950]: https://github.com/cockroachdb/cockroach/pull/79950 -[#80013]: https://github.com/cockroachdb/cockroach/pull/80013 -[#80137]: https://github.com/cockroachdb/cockroach/pull/80137 -[#80145]: https://github.com/cockroachdb/cockroach/pull/80145 -[#80165]: https://github.com/cockroachdb/cockroach/pull/80165 -[#80243]: https://github.com/cockroachdb/cockroach/pull/80243 diff --git a/src/current/_includes/releases/v22.1/v22.1.0-beta.5.md b/src/current/_includes/releases/v22.1/v22.1.0-beta.5.md deleted file mode 100644 index 13cd4b5b09f..00000000000 --- a/src/current/_includes/releases/v22.1/v22.1.0-beta.5.md +++ /dev/null @@ -1,51 +0,0 @@ -## v22.1.0-beta.5 - -Release Date: May 3, 2022 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

Security updates

- -- `crdb_internal.reset_sql_stats()` and `crdb_internal.reset_index_usage_stats()` [built-in functions](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#system-info-functions) now check if the user has the [admin role](https://www.cockroachlabs.com/docs/v22.1/security-reference/authorization#admin-role). [#80384][#80384] - -- SCRAM authentication and password encryption are not enabled by default. [#80248][#80248] - -

Enterprise edition changes

- -- [Backups](https://www.cockroachlabs.com/docs/v22.1/take-full-and-incremental-backups) run by secondary tenants now write protected timestamp records to protect their target schema objects from garbage collection during backup execution. [#80670][#80670] - -

SQL language changes

- 
-- The [cluster settings](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) `cloudstorage.<provider>.read.node_rate_limit` and `cloudstorage.<provider>.read.node_burst_limit` can now be used to limit throughput when reading from cloud storage during a [`RESTORE`](https://www.cockroachlabs.com/docs/v22.1/restore) or [`IMPORT`](https://www.cockroachlabs.com/docs/v22.1/import). [#80281][#80281] - -

Bug fixes

- -- Fixed a bug where automatic encryption-at-rest data key rotation would become disabled after a node restart without a store key rotation. [#80564][#80564] - -- Fixed a bug whereby the cluster version could regress due to a race condition. [#80712][#80712] - -

Performance improvements

- -- Bulk ingestion of unsorted data during [`IMPORT`](https://www.cockroachlabs.com/docs/v22.1/import) and schema changes now uses a higher level of parallelism to send produced data to the [storage layer](https://www.cockroachlabs.com/docs/v22.1/architecture/storage-layer). [#80487][#80487] - -

Miscellaneous

- -

Docker

- -- Refactored the initialization process of the Docker image to accommodate the use case with memory storage. [#80558][#80558] - -

Contributors

- -This release includes 29 merged PRs by 20 authors. - -[#80248]: https://github.com/cockroachdb/cockroach/pull/80248 -[#80281]: https://github.com/cockroachdb/cockroach/pull/80281 -[#80384]: https://github.com/cockroachdb/cockroach/pull/80384 -[#80487]: https://github.com/cockroachdb/cockroach/pull/80487 -[#80558]: https://github.com/cockroachdb/cockroach/pull/80558 -[#80564]: https://github.com/cockroachdb/cockroach/pull/80564 -[#80641]: https://github.com/cockroachdb/cockroach/pull/80641 -[#80670]: https://github.com/cockroachdb/cockroach/pull/80670 -[#80712]: https://github.com/cockroachdb/cockroach/pull/80712 -[7d55af0e6]: https://github.com/cockroachdb/cockroach/commit/7d55af0e6 -[c02b3f015]: https://github.com/cockroachdb/cockroach/commit/c02b3f015 diff --git a/src/current/_includes/releases/v22.1/v22.1.0-rc.1.md b/src/current/_includes/releases/v22.1/v22.1.0-rc.1.md deleted file mode 100644 index 74439b7ac97..00000000000 --- a/src/current/_includes/releases/v22.1/v22.1.0-rc.1.md +++ /dev/null @@ -1,18 +0,0 @@ -## v22.1.0-rc.1 - -Release Date: May 9, 2022 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

Bug fixes

- 
-- Fixed a very rare case where CockroachDB could incorrectly evaluate queries with an [`ORDER BY`](https://www.cockroachlabs.com/docs/v22.1/order-by) clause when a prefix of the ordering was already provided by the index ordering of the scanned table. [#80715][#80715]
- 
-- Fixed a rare crash when encountering a nil-pointer dereference in `google.golang.org/grpc/internal/transport.(*Stream).Context(...)`. [#80936][#80936] - -

Contributors

- -This release includes 3 merged PRs by 3 authors. - -[#80715]: https://github.com/cockroachdb/cockroach/pull/80715 -[#80936]: https://github.com/cockroachdb/cockroach/pull/80936 diff --git a/src/current/_includes/releases/v22.1/v22.1.0.md b/src/current/_includes/releases/v22.1/v22.1.0.md deleted file mode 100644 index 435efb9bc93..00000000000 --- a/src/current/_includes/releases/v22.1/v22.1.0.md +++ /dev/null @@ -1,153 +0,0 @@ -## v22.1.0 - -Release Date: May 24, 2022 - -With the release of CockroachDB v22.1, we've made a variety of management, performance, security, and compatibility improvements. Check out a [summary of the most significant user-facing changes](#v22-1-0-feature-summary) and then [upgrade to CockroachDB v22.1](https://www.cockroachlabs.com/docs/v22.1/upgrade-cockroach-version). For a release announcement with further focus on key features, see the [v22.1 blog post](https://www.cockroachlabs.com/blog/cockroachdb-22-1-release/). - -We're running a packed [schedule of launch events](https://www.cockroachlabs.com/cockroachdb-22-1-launch/) over the next few weeks, which include two opportunities to win limited-edition swag. Join our [Office Hours session](https://www.cockroachlabs.com/webinars/22-1-release-office-hours/) for all your questions, a [coding livestream](http://twitch.tv/itsaydrian) where we'll play with new features, and a [live talk on building and preparing for scale](https://www.cockroachlabs.com/webinars/scale-happens/). - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

CockroachDB {{ site.data.products.cloud }}

- -- Get a free v22.1 cluster on CockroachDB {{ site.data.products.serverless }}. -- Learn about recent updates to CockroachDB {{ site.data.products.cloud }} in the [CockroachDB {{ site.data.products.cloud }} Release Notes]({% link releases/cloud.md %}). - -

Feature summary

- -This section summarizes the most significant user-facing changes in v22.1.0. For a complete list of features and changes, including bug fixes and performance improvements, see the [release notes]({% link releases/index.md %}#testing-releases) for previous testing releases. You can also search for [what's new in v22.1 in our docs](https://www.cockroachlabs.com/docs/search?query=new+in+v22.1). - -{{site.data.alerts.callout_info}} -"Core" features are freely available in the core version of CockroachDB and do not require an enterprise license. "Enterprise" features require an [enterprise license](https://www.cockroachlabs.com/get-cockroachdb/enterprise/). [CockroachDB {{ site.data.products.cloud }} clusters](https://cockroachlabs.cloud/) include all enterprise features. You can also use [`cockroach demo`](https://www.cockroachlabs.com/docs/v22.1/cockroach-demo) to test enterprise features in a local, temporary cluster. -{{site.data.alerts.end}} - -- [SQL](#v22-1-0-sql) -- [Recovery and I/O](#v22-1-0-recovery-and-i-o) -- [Database operations](#v22-1-0-database-operations) -- [Backward-incompatible changes](#v22-1-0-backward-incompatible-changes) -- [Deprecations](#v22-1-0-deprecations) -- [Known limitations](#v22-1-0-known-limitations) -- [Education](#v22-1-0-education) - - - -

SQL

- - - -| Version | Feature | Description | -----------+---------+-------------- -| Core | Hash-sharded indexes | [Hash-sharded indexes](https://www.cockroachlabs.com/docs/v22.1/hash-sharded-indexes) distribute sequential data across multiple nodes within your cluster, eliminating hotspots in certain types of scenarios. This feature is now generally available (GA) after a previous experimental release. | -| Core | Super regions | [Super regions](https://www.cockroachlabs.com/docs/v22.1/add-super-region) allow you to define a set of regions on the database such that any `REGIONAL BY TABLE` table based in the super region or any `REGIONAL BY ROW` partition in the super region will have all their replicas in regions that are also within the super region. Their primary use is for [data domiciling](https://www.cockroachlabs.com/docs/v22.1/data-domiciling). This feature is in preview release. | -| Core | Support for AWS DMS | Support for [AWS Database Migration Service (AWS DMS)](https://www.cockroachlabs.com/docs/v22.1/third-party-database-tools#schema-migration-tools) allows users to migrate data from an existing database to CockroachDB. | -| Core | Admission control | [Admission control](https://www.cockroachlabs.com/docs/v22.1/admission-control) helps maintain cluster performance and availability when some nodes experience high load. This was previously available as a preview release but is now generally available and enabled by default. | -| Core | Set a quality of service (QoS) level for SQL sessions with admission control | In an overload scenario where CockroachDB cannot service all requests, you can identify which requests should be prioritized by setting a _quality of service_ (QoS). [Admission control](https://www.cockroachlabs.com/docs/v22.1/admission-control) queues work throughout the system. You can [set the QoS level](https://www.cockroachlabs.com/docs/v22.1/admission-control#set-quality-of-service-level-for-a-session) on its queues for SQL requests submitted in a session to `background`, `regular`, or `critical`. | -| Core | Rename objects within the transaction that creates them | It is now possible to swap names for tables and other objects within the same transaction that creates them. For example: `CREATE TABLE foo(); BEGIN; ALTER TABLE foo RENAME TO bar; CREATE TABLE foo(); COMMIT;` | -| Core | Drop `ENUM` values using `ALTER TYPE...DROP VALUE` | Drop a specific value from the user-defined type's list of values. The [`ALTER TYPE...DROP VALUE` statement](https://www.cockroachlabs.com/docs/v22.1/alter-type) is now available by default to all instances. It was previously disabled by default, requiring the cluster setting enable_drop_enum_value to enable it. | -| Core | Support the `UNION` variant for recursive CTE | For compatibility with PostgreSQL, `WITH RECURSIVE...UNION` statements are now supported in [recursive common table expressions](https://www.cockroachlabs.com/docs/v22.1/common-table-expressions#recursive-common-table-expressions). 
| -| Core | Locality optimized search supports `LIMIT` clauses | Queries with a `LIMIT` clause on a single table, either explicitly written or implicit such as in an uncorrelated EXISTS subquery, now [scan that table with improved latency](https://www.cockroachlabs.com/docs/v22.1/cost-based-optimizer#locality-optimized-search-in-multi-region-clusters) if the table is defined with `LOCALITY REGIONAL BY ROW` and the number of qualified rows residing in the local region does not exceed the hard limit (the sum of the `LIMIT` clause and optional `OFFSET` clause values). This optimization is only applied if the hard limit is 100,000 or less. | -| Core | Surface errors for testing retry logic | To help developers test their application's retry logic, they can set the session variable `inject_retry_errors_enabled` so that any statement that is not a `SET` statement will [return a transaction retry error](https://www.cockroachlabs.com/docs/v22.1/transactions#testing-transaction-retry-logic) if it is run inside of an explicit transaction (see the example after this table). | -| Core | Row Level TTL (preview release) | With Time to Live ("TTL") expiration on table rows, also known as [Row-Level TTL](https://www.cockroachlabs.com/docs/v22.1/row-level-ttl), CockroachDB automatically deletes rows once they have been stored longer than their specified expiration time. This avoids the complexities and potential performance impacts of managing expiration at the application level. See the documentation for Limitations that are part of this preview release. | -| Core | `DATE` and `INTERVAL` style settings available by default | The session variables `datestyle_enabled` and `intervalstyle_enabled`, and the cluster settings `sql.defaults.datestyle.enabled` and `sql.defaults.intervalstyle.enabled` no longer have any effect. When the upgrade to v22.1 is finalized, all of these settings are effectively interpreted as `true`, enabling the use of the `intervalstyle` and `datestyle` session and cluster settings. | -| Core | Optimized node draining with `connection_wait` | If you cannot tolerate connection errors during node drain, you can now change the `server.shutdown.connection_wait` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) to allow SQL client connections to gracefully close before CockroachDB forcibly closes them. For guidance, see [Node Shutdown](https://www.cockroachlabs.com/docs/v22.1/node-shutdown#server-shutdown-connection_wait). | -| Core | PostgreSQL wire protocol query cancellation | In addition to the `CANCEL QUERY` SQL statement, developers can now use the [cancellation method specified by the PostgreSQL wire protocol](https://www.cockroachlabs.com/docs/v22.1/cancel-query#considerations). | -| Core | Gateway node connection limits | To control the maximum number of non-superuser ([`root`](https://www.cockroachlabs.com/docs/v22.1/security-reference/authorization#root-user) user or other [`admin` role](https://www.cockroachlabs.com/docs/v22.1/security-reference/authorization#admin-role)) connections a [gateway node](https://www.cockroachlabs.com/docs/v22.1/architecture/sql-layer#gateway-node) can have open at one time, use the `server.max_connections_per_gateway` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings). If a new non-superuser connection would exceed this limit, an error message and code are returned. | -| Core | Support for `WITH GRANT OPTION` privilege | See [Security](#v22-1-0-security).
| -| Core | Transaction contention events | [Transaction contention events](https://www.cockroachlabs.com/docs/v22.1/crdb-internal#transaction_contention_events) enable you to determine where contention is occurring in real-time for affected active statements, and historically for past statements. | -| Core | Index recommendations | [Index recommendations](https://www.cockroachlabs.com/docs/v22.1/explain#default-statement-plans) indicate when your query would benefit from an index and provide a suggested statement to create the index. | - -
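The sketch below illustrates a few of the SQL features listed in the table above; the type, table, and value names are hypothetical, and the statements assume a v22.1 cluster.

~~~ sql
-- Drop a value from a user-defined ENUM type (no cluster setting required in v22.1).
CREATE TYPE ticket_status AS ENUM ('open', 'closed', 'stale');
ALTER TYPE ticket_status DROP VALUE 'stale';

-- PostgreSQL-compatible recursive CTE using UNION (deduplicating) instead of UNION ALL.
WITH RECURSIVE nums (n) AS (
    SELECT 1
    UNION
    SELECT n + 1 FROM nums WHERE n < 5
)
SELECT n FROM nums;

-- Make statements inside explicit transactions return retry errors, to exercise retry logic.
SET inject_retry_errors_enabled = true;
~~~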

Developer Experience

- -| Version | Feature | Description | -----------+---------+-------------- -| Core | Support for Prisma | CockroachDB now supports the [Prisma ORM](https://www.prisma.io/blog/prisma-preview-cockroach-db-release). A new [tutorial and example app](https://www.cockroachlabs.com/docs/v22.1/build-a-nodejs-app-with-cockroachdb-prisma) are available. | -| Core | Lightweight `cockroach-sql` executable | A new client-only SQL shell for users who do not operate the cluster themselves. | - -

Recovery and I/O

- -| Version | Feature | Description | -----------+---------+-------------- -| Enterprise | Alter changefeeds | The new SQL statement [ALTER CHANGEFEED](https://www.cockroachlabs.com/docs/v22.1/alter-changefeed) enables users to modify active changefeeds, preventing the need to start a new changefeed. | -| Enterprise | Track metrics per changefeed | Create [labels for capturing a metric](https://www.cockroachlabs.com/docs/v22.1/monitor-and-debug-changefeeds#using-changefeed-metrics-labels) across one or more specified changefeeds. This is an experimental feature that you can enable using a cluster setting. | -| Core | Changefeed support for tables with multiple column families | Changefeeds can now target [tables with more than one column family](https://www.cockroachlabs.com/docs/v22.1/changefeeds-on-tables-with-column-families) using either the `split_column_families` option or the `FAMILY` keyword. Changefeeds will emit individual messages per column family on a table. | -| Enterprise | Stream data to Google Cloud Pub/Sub | Changefeeds can now [stream data to a Pub/Sub sink](https://www.cockroachlabs.com/docs/v22.1/changefeed-examples#create-a-changefeed-connected-to-a-google-cloud-pub-sub-sink). | -| Core | Export to the Apache Parquet format | Using a `SQL EXPORT `statement, users can now choose to [export data to the Parquet format](https://www.cockroachlabs.com/docs/v22.1/export). | -| Core | Backup encryption enhancements | See [Security](#v22-1-0-security). | -| Core | Select an S3 storage class for backups | [Associate your backup objects with a specific storage class](https://www.cockroachlabs.com/docs/v22.1/backup#back-up-with-an-s3-storage-class) in your Amazon S3 bucket. | -| Core | Exclude a table's data from backups | [Exclude a table's row data from a backup](https://www.cockroachlabs.com/docs/v22.1/create-table#create-a-table-with-data-excluded-from-backup). This may be useful for tables with high-churn data that you would like to garbage collect more quickly than the incremental backup schedule. | -| Core | Store incremental backups in custom locations | Specify a different [storage location for incremental backups](https://www.cockroachlabs.com/docs/v22.1/backup#create-incremental-backups) using the new BACKUP option `incremental_location`. This makes it easier to retain full backups longer than incremental backups, as is often required for compliance reasons. | -| Core | Rename database on restore | An optional `new_db_name` clause on [`RESTORE DATABASE`](https://www.cockroachlabs.com/docs/v22.1/restore#databases) statements allows the user to rename the database they intend to restore. This can be helpful in disaster recovery scenarios when restoring to a temporary state. | - -
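A minimal sketch of two of the items above; the storage URIs, subdirectory, and database names are placeholders rather than working endpoints.

~~~ sql
-- Export query results to Parquet files in cloud storage.
EXPORT INTO PARQUET 's3://bucket/exports?AUTH=implicit' FROM SELECT * FROM users;

-- Restore a database from a backup collection under a different name.
RESTORE DATABASE movr FROM '2022/05/20-120000.00' IN 's3://bucket/backups?AUTH=implicit'
    WITH new_db_name = 'movr_restored';
~~~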

Database operations

- -| Version | Feature | Description | ----------+---------+-------------- -| Core | DB Console access from a specified node | On the Advanced Debug page, DB Console access can be [routed from the currently accessed node](https://www.cockroachlabs.com/docs/v22.1/ui-debug-pages#license-and-node-information) to a specific node on the cluster. | -| Core | Alerting rules | Every CockroachDB node exports an [alerting rules template](https://www.cockroachlabs.com/docs/v22.1/monitoring-and-alerting#prometheus-alerting-rules-endpoint) at `http://<host>:<http-port>/api/v2/rules/`. These rule definitions are formatted for easy integration with Prometheus' Alertmanager. | -| Core | `NOSQLLOGIN` role option | The `NOSQLLOGIN` [role option](https://www.cockroachlabs.com/docs/v22.1/create-role#role-options) grants a user access to the DB Console without also granting SQL shell access (see the example after this table). | -| Core | Hot ranges observability | The [Hot Ranges page](https://www.cockroachlabs.com/docs/v22.1/ui-hot-ranges-page) of the DB Console provides details about ranges receiving a high number of reads or writes. | -| Core | Per-replica circuit breakers | When individual ranges become temporarily unavailable, requests to those ranges are refused by a [per-replica "circuit breaker" mechanism](https://www.cockroachlabs.com/docs/v22.1/architecture/replication-layer#per-replica-circuit-breakers) instead of hanging indefinitely. | - -
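For example, a DB Console-only user could be created as sketched below (the role name and password are hypothetical placeholders):

~~~ sql
-- A role that can log in to the DB Console but cannot open SQL shell sessions.
CREATE ROLE metrics_viewer WITH LOGIN PASSWORD 'replace-with-a-strong-password' NOSQLLOGIN;
~~~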

Security

- -| Version | Feature | Description | -----------+---------+-------------- -| Core | Support of Google Cloud KMS for encrypted backups | Google Cloud KMS is now supported as a key management system for [encrypted BACKUP and RESTORE operations](https://www.cockroachlabs.com/docs/v22.1/take-and-restore-encrypted-backups). | -| Enterprise | Rotate backup encryption keys | Keep your backups secure by rotating the AWS or Google Cloud KMS keys you use to encrypt your backups and adding them to an existing key chain using the new [ALTER BACKUP](https://www.cockroachlabs.com/docs/v22.1/alter-backup) statement. | -| Core | Support for `WITH GRANT OPTION` privilege | Users granted a privilege with [`WITH GRANT OPTION`](https://www.cockroachlabs.com/docs/v22.1/grant) can in turn grant that privilege to others. The owner of an object implicitly has the `GRANT OPTION` for all privileges, and the `GRANT OPTION` is inherited through role memberships. This matches functionality offered in PostgreSQL. | -| Core | Support client-provided password hashes for credential definitions | CockroachDB now [recognizes pre-computed password hashes](https://www.cockroachlabs.com/docs/v22.1/security-reference/scram-authentication#server-user_login-store_client_pre_hashed_passwords-enabled) when presented to the regular `PASSWORD` option when creating or updating a role. | -| Core | Support SCRAM-SHA-256 SASL authentication method | CockroachDB is now able to [authenticate users](https://www.cockroachlabs.com/docs/v22.1/security-reference/authentication) via the DB Console and SQL sessions when the client provides a cleartext password and the stored credentials are encoded [using the SCRAM-SHA-256 algorithm](https://www.cockroachlabs.com/docs/v22.1/security-reference/scram-authentication). For SQL client sessions, authentication methods `password` (cleartext passwords) and `cert-password` (TLS client cert or cleartext password) with either CRDB-BCRYPT or SCRAM-SHA-256 stored credentials can now be used. Previously, only CRDB-BCRYPT stored credentials were supported for cleartext password authentication. | -| Core | Support HSTS headers to enforce HTTPS | Clusters can now be configured to send HSTS headers with HTTP requests in order to enable browser-level enforcement of HTTPS for the cluster host. Once the headers are present, after an initial request, browsers will force HTTPS on all subsequent connections to the host. This reduces the possibility of MitM attacks, to which HTTP-to-HTTPS redirects are vulnerable. | - -
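A short sketch of the `WITH GRANT OPTION` behavior described above (the table and role names are hypothetical):

~~~ sql
-- analyst receives SELECT and may grant it onward.
GRANT SELECT ON TABLE accounts TO analyst WITH GRANT OPTION;
-- Run as analyst: pass the privilege on to another role.
GRANT SELECT ON TABLE accounts TO intern;
~~~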

Backward-incompatible changes

- -Before [upgrading to CockroachDB v22.1](https://www.cockroachlabs.com/docs/v22.1/upgrade-cockroach-version), be sure to review the following backward-incompatible changes and adjust your deployment as necessary. - -- Using [`SESSION_USER`](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#special-syntax-forms) in a projection or `WHERE` clause now returns the `SESSION_USER` instead of the `CURRENT_USER`. For backward compatibility, use [`session_user()`](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#system-info-functions) for `SESSION_USER` and `current_user()` for `CURRENT_USER`. [#70444][#70444] -- Placeholder values (e.g., `$1`) can no longer be used for role names in [`ALTER ROLE`](https://www.cockroachlabs.com/docs/v22.1/alter-role) statements or for role names in [`CREATE ROLE`](https://www.cockroachlabs.com/docs/v22.1/create-role)/[`DROP ROLE`](https://www.cockroachlabs.com/docs/v22.1/drop-role) statements. [#71498][#71498] -- Support has been removed for: - - `IMPORT TABLE ... CREATE USING` - - `IMPORT TABLE ... DATA` - refers to CSV, Delimited, PGCOPY, or AVRO. These are formats that do not define the table schema in the same file as the data. The workaround following this change is to use `CREATE TABLE` with the same schema that was previously being passed into the IMPORT statement, followed by an `IMPORT INTO` the newly created table. -- Non-standard [`cron`](https://wikipedia.org/wiki/Cron) expressions that specify seconds or year fields are no longer supported. [#74881][#74881] -- [Changefeeds](https://www.cockroachlabs.com/docs/v22.1/change-data-capture-overview) will now filter out [virtual computed columns](https://www.cockroachlabs.com/docs/v22.1/computed-columns) from events by default. [#74916][#74916] -- The [environment variable](https://www.cockroachlabs.com/docs/v22.1/cockroach-commands#environment-variables) that controls the max amount of CPU that can be taken by password hash computations during authentication was renamed from `COCKROACH_MAX_BCRYPT_CONCURRENCY` to `COCKROACH_MAX_PW_HASH_COMPUTE_CONCURRENCY`. Its semantics remain unchanged. [#74301][#74301] -- The volatility of cast operations between [strings](https://www.cockroachlabs.com/docs/v22.1/string) and [intervals](https://www.cockroachlabs.com/docs/v22.1/interval) or [timestamps](https://www.cockroachlabs.com/docs/v22.1/timestamp) has changed from immutable to stable. This means that these cast operations can no longer be used in computed columns or partial index definitions. Instead, use the following [built-in functions:](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators) `parse_interval`, `parse_date`, `parse_time`, `parse_timetz`, `parse_timestamp`, or `to_char`. Upon upgrade to v22.1, CockroachDB will automatically rewrite any computed columns or partial indexes that use the affected casts to use the new built-in functions. [#78455][#78455] -- Users can no longer define the subdirectory of their full backup. This deprecated syntax can be enabled by changing the new `bulkio.backup.deprecated_full_backup_with_subdir` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) to `true`. [#80145][#80145] - -
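Two of the changes above affect existing SQL directly. The hedged sketch below (hypothetical table names and storage URI) shows the new built-in functions replacing a string-to-timestamp cast in a computed column, and the `CREATE TABLE` plus `IMPORT INTO` workaround for the removed `IMPORT TABLE` variants.

~~~ sql
-- Casts from STRING to TIMESTAMP are now stable, so use parse_timestamp() instead.
CREATE TABLE events (
    raw_ts STRING,
    ts TIMESTAMP AS (parse_timestamp(raw_ts)) STORED
);

-- Workaround for the removed IMPORT TABLE forms: create the table, then import into it.
CREATE TABLE users (id INT PRIMARY KEY, name STRING);
IMPORT INTO users (id, name) CSV DATA ('s3://bucket/users.csv?AUTH=implicit');
~~~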

Deprecations

- -- Using the [`cockroach node drain`](https://www.cockroachlabs.com/docs/v22.1/cockroach-node) command without specifying a node ID is deprecated. [#73991][#73991] -- The flag `--self` of the [`cockroach node decommission` command](https://www.cockroachlabs.com/docs/v22.1/cockroach-node) is deprecated. Instead, operators should specify the node ID of the target node as an explicit argument. The node that the command is connected to should not be a target node. [#74319][#74319] -- The `experimental_enable_hash_sharded_indexes` session variable is deprecated as hash-sharded indexes are enabled by default. Enabling this setting results in a no-op. [#78038][#78038] -- The [`BACKUP TO`](https://www.cockroachlabs.com/docs/v22.1/) syntax to take backups is deprecated, and will be removed in a future release. Create a backup collection using the `BACKUP INTO` syntax (see the example after this list). [#78250][#78250] -- Users can no longer define the subdirectory of their full backup. This deprecated syntax can be enabled by changing the new `bulkio.backup.deprecated_full_backup_with_subdir` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) to `true`. [#80145][#80145] -- `SHOW BACKUP` without the `IN` keyword to specify a subdirectory is deprecated and will be removed from a future release. Users are recommended to only create collection-based backups and view them with `SHOW BACKUP FROM <subdirectory> IN <collection URI>`. [#79116][#79116] -- Using the [`RESTORE FROM`](https://www.cockroachlabs.com/docs/v22.1/restore) syntax without an explicit subdirectory pointing to a backup in a collection is deprecated, and will be removed in a future release. Use `RESTORE FROM <subdirectory> IN <collection URI>` to restore a particular backup in a collection. [#78250][#78250] - -
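For reference, a sketch of the collection-based syntax that replaces the deprecated forms (the collection URI and subdirectory are placeholders):

~~~ sql
-- Create and inspect a backup collection, then restore a specific backup from it.
BACKUP DATABASE bank INTO 's3://bucket/backups?AUTH=implicit';
SHOW BACKUPS IN 's3://bucket/backups?AUTH=implicit';
SHOW BACKUP FROM '2022/05/24-120000.00' IN 's3://bucket/backups?AUTH=implicit';
RESTORE DATABASE bank FROM '2022/05/24-120000.00' IN 's3://bucket/backups?AUTH=implicit';
~~~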

Known limitations

- -For information about new and unresolved limitations in CockroachDB v22.1, with suggested workarounds where applicable, see [Known Limitations](https://www.cockroachlabs.com/docs/v22.1/known-limitations). - -

Education

- -| Area | Topic | Description | ----------------------+---------------------------+------------ -| Cockroach University | New Java Course | [Event-Driven Architecture for Java Developers](https://university.cockroachlabs.com/courses/course-v1:crl+event-driven-architecture-for-java-devs+self-paced/about) teaches you how to handle message queues by building the [transactional outbox pattern](https://www.cockroachlabs.com/blog/message-queuing-database-kafka/) into your application using CockroachDB's built-in Change Data Capture feature. | -| Cockroach University | New SQL for Application Developers Courses | The new SQL for Application Developers skill path helps developers new to SQL learn how to model their application object relationships in a relational database and use transactions. Its first two courses, now available as a limited preview, are [Getting Started With SQL for Application Developers](https://university.cockroachlabs.com/courses/course-v1:crl+getting-started-with-sql+preview/about) and [Modeling Object Relationships in SQL](https://university.cockroachlabs.com/courses/course-v1:crl+modeling-object-relationships-in-sql+preview/about). | -| Docs | CockroachDB Cloud Guidance | New docs on how to use the [Cloud API](https://www.cockroachlabs.com/docs/cockroachcloud/cloud-api) to programmatically manage the lifecycle of clusters within your organization, how to use the [`ccloud` command](https://www.cockroachlabs.com/docs/cockroachcloud/ccloud-get-started) to create, manage, and connect to CockroachDB Cloud clusters, and how to do performance benchmarking with a CockroachDB {{ site.data.products.serverless }} cluster. | -| Docs | Improved SQL Guidance | New documentation on transaction guardrails via [limiting the number of rows written or read in a transaction](https://www.cockroachlabs.com/docs/v22.1/transactions#limit-the-number-of-rows-written-or-read-in-a-transaction) and improved content on the use of indexes [in performance recipes](https://www.cockroachlabs.com/docs/v22.1/performance-recipes)and [secondary indexes](https://www.cockroachlabs.com/docs/v22.1/schema-design-indexes). | -| Docs | New ORM tutorials and sample apps for CockroachDB {{ site.data.products.serverless }} | Tutorials for [AWS Lambda](https://www.cockroachlabs.com/docs/v22.1/deploy-lambda-function), [Knex.JS](https://www.cockroachlabs.com/docs/v22.1/build-a-nodejs-app-with-cockroachdb-knexjs), [Prisma](https://www.cockroachlabs.com/docs/v22.1/build-a-nodejs-app-with-cockroachdb-prisma), [Netlify](https://www.cockroachlabs.com/docs/v22.1/deploy-app-netlify), and [Vercel](https://www.cockroachlabs.com/docs/v22.1/deploy-app-vercel). | -| Docs | Additional developer resources | Best practices for [serverless functions](https://www.cockroachlabs.com/docs/v22.1/serverless-function-best-practices) and [testing/CI environments](https://www.cockroachlabs.com/docs/v22.1/local-testing), and a new [client connection reference](https://www.cockroachlabs.com/docs/v22.1/connect-to-the-database) page with CockroachDB {{ site.data.products.serverless }}, Dedicated, and Self-Hosted connection strings for fully-supported drivers/ORMs. | -| Docs | Security doc improvements | We have restructured and improved the Security section, including [supported authentication methods](https://www.cockroachlabs.com/docs/v22.1/security-reference/authentication#currently-supported-authentication-methods). 
| -| Docs | Content overhauls | [Stream Data (Changefeeds)](https://www.cockroachlabs.com/docs/v22.1/change-data-capture-overview) and [Performance](https://www.cockroachlabs.com/docs/v22.1/make-queries-fast) docs have also been restructured and improved. | -| Docs | Improved release notes | Release notes (_What's New?_ pages) are now compiled to one page per major version. | -| Docs | New Glossary | The new [Glossary](https://www.cockroachlabs.com/docs/v22.1/architecture/glossary) page under the Get Started section of the docs compiles two existing glossaries and includes additional definitions for terms commonly found within the docs. | -| Docs | New Nav | The new navigation menu structure for the docs better classifies types of user tasks. | diff --git a/src/current/_includes/releases/v22.1/v22.1.1.md b/src/current/_includes/releases/v22.1/v22.1.1.md deleted file mode 100644 index 2f5cd27ada0..00000000000 --- a/src/current/_includes/releases/v22.1/v22.1.1.md +++ /dev/null @@ -1,194 +0,0 @@ -## v22.1.1 - -Release Date: June 6, 2022 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

Security updates

- -- The `crdb_internal.reset_sql_stats()` and `crdb_internal.reset_index_usage_stats()` [built-in functions](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#built-in-functions) now check whether the user has the [`admin` role](https://www.cockroachlabs.com/docs/v22.1/security-reference/authorization#admin-role). [#80278][#80278] - -

General changes

- -- When using Azure Storage for data operations, CockroachDB now calculates the storage account URL from the provided `AZURE_ENVIRONMENT` query parameter. If not specified, this defaults to `AzurePublicCloud` to maintain backward compatibility. This parameter should **not** be used when the cluster is in a mixed-version or upgrading state, as nodes that have not been upgraded will continue to send requests to `AzurePublicCloud` even in the presence of this parameter. [#80801][#80801] - -
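For example, the parameter is passed as part of the Azure storage URI. This is a sketch only: the container, account, and credentials are placeholders, and the environment name shown follows Azure SDK naming and is an assumption.

~~~ sql
-- Omit AZURE_ENVIRONMENT to default to AzurePublicCloud.
BACKUP DATABASE bank INTO
    'azure://container/backups?AZURE_ACCOUNT_NAME=myaccount&AZURE_ACCOUNT_KEY=REDACTED&AZURE_ENVIRONMENT=AzureUSGovernmentCloud';
~~~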

Enterprise edition changes

- -- Previously, backups in the base directory of a Google Cloud Storage bucket would not be discovered by [`SHOW BACKUPS`](https://www.cockroachlabs.com/docs/v22.1/show-backup). These backups will now appear correctly. [#80493][#80493] -- [Changefeeds](https://www.cockroachlabs.com/docs/v22.1/change-data-capture-overview) to Google Cloud Platform no longer require topic creation permission if all topics being written to already exist. [#81684][#81684] - -
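For example (hypothetical bucket name), backups written to the base directory of a Google Cloud Storage bucket are now listed by:

~~~ sql
SHOW BACKUPS IN 'gs://bucket?AUTH=implicit';
~~~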

SQL language changes

- -- `ttl_job_cron` is now displayed on [`SHOW CREATE TABLE`](https://www.cockroachlabs.com/docs/v22.1/show-create) and the specified `reloptions` by default. [#80292][#80292] -- Added the `crdb_internal.cluster_locks` virtual table, which exposes the current state of locks on keys tracked by concurrency control. The virtual table displays metadata on locks currently held by transactions, as well as operations waiting to obtain the locks, and as such can be used to visualize active contention. The `VIEWACTIVITY` or `VIEWACTIVITYREDACTED` [role option](https://www.cockroachlabs.com/docs/v22.1/security-reference/authorization#role-options), or the [`admin` role](https://www.cockroachlabs.com/docs/v22.1/security-reference/authorization#admin-role), is required to access the virtual table; however, if the user only has the `VIEWACTIVITYREDACTED` role option, the key on which a lock is held will be redacted. [#80517][#80517] -- [`BACKUP`](https://www.cockroachlabs.com/docs/v22.1/backup), [`IMPORT`](https://www.cockroachlabs.com/docs/v22.1/import), and [`RESTORE`](https://www.cockroachlabs.com/docs/v22.1/restore) jobs will be paused instead of entering a failed state if they continue to encounter transient errors once they have retried a maximum number of times. The user is responsible for cancelling or resuming the job from this state. [#80434][#80434] -- Added a `sql.conn.failures` counter metric that shows the number of failed SQL connections. [#80987][#80987] -- Constraints that only include hidden columns are no longer excluded in [`SHOW CONSTRAINTS`](https://www.cockroachlabs.com/docs/v22.1/show-constraints). This behavior can be changed using the `show_primary_key_constraint_on_not_visible_columns` session variable. [#80637][#80637] -- Added a `sql.txn.contended.count` metric that exposes the total number of transactions that experienced [contentions](https://www.cockroachlabs.com/docs/v22.1/transactions#transaction-contention). [#81070][#81070] -- Automatic statistics collection can now be [enabled or disabled for individual tables](https://www.cockroachlabs.com/docs/v22.1/cost-based-optimizer#enable-and-disable-automatic-statistics-collection-for-tables), taking precedence over the `sql.stats.automatic_collection.enabled`, `sql.stats.automatic_collection.fraction_stale_rows`, or `sql.stats.automatic_collection.min_stale_rows` [cluster settings](https://www.cockroachlabs.com/docs/v22.1/cluster-settings). The table settings may be set at table creation time, or later via [`ALTER TABLE ... SET`](https://www.cockroachlabs.com/docs/v22.1/alter-table). Note that any row mutations which occurred a minute or two before disabling automatic statistics collection via `ALTER TABLE ... SET` may trigger statistics collection, but DML statements submitted after the setting change will not. [#81019][#81019] -- Added a new session variable, `enable_multiple_modifications_of_table`, which can be used instead of the cluster variable `sql.multiple_modifications_of_table.enabled` to allow statements containing multiple [`INSERT ON CONFLICT`](https://www.cockroachlabs.com/docs/v22.1/insert), [`UPSERT`](https://www.cockroachlabs.com/docs/v22.1/upsert), [`UPDATE`](https://www.cockroachlabs.com/docs/v22.1/update), or [`DELETE`](https://www.cockroachlabs.com/docs/v22.1/delete) subqueries modifying the same table. 
As with `sql.multiple_modifications_of_table.enabled`, be warned that with this session variable enabled, there is nothing to prevent the table corruption seen in issue [#70731](https://github.com/cockroachdb/cockroach/issues/70731) from occurring if the same row is modified multiple times by different subqueries of a single statement. It is best to rewrite these statements, but the session variable is provided as an aid if this is not possible. [#79930][#79930] -- Fixed a small typo when using `DateStyle` and `IntervalStyle`. [#81550][#81550] -- Added an `is_grantable` column to [`SHOW GRANTS FOR {role}`](https://www.cockroachlabs.com/docs/v22.1/show-grants) for consistency with other `SHOW GRANTS` commands. [#81820][#81820] -- Improved query performance for `crdb_internal.cluster_locks` when issued with constraints in the `WHERE` clause on `table_id`, `database_name`, or `table_name` columns (see the example after this list). [#81261][#81261] - -
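For instance (assuming a table named `orders`), a query constrained on `table_name` exercises the virtual-table improvement noted in the last item above:

~~~ sql
-- Inspect current lock holders and waiters for a single table.
SELECT * FROM crdb_internal.cluster_locks WHERE table_name = 'orders';
~~~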

Operational changes

- -- The default value for `storage.max_sync_duration` has been lowered from `60s` to `20s`. CockroachDB will now exit sooner with a fatal error if a single slow disk operation exceeds this value. [#81496][#81496] -- The [`cockroach debug zip`](https://www.cockroachlabs.com/docs/v22.1/cockroach-debug-zip) and [`cockroach debug merge-logs`](https://www.cockroachlabs.com/docs/v22.1/cockroach-debug-merge-logs) commands will now work with [JSON-formatted logs](https://www.cockroachlabs.com/docs/v22.1/log-formats#format-json). [#81469][#81469] - -
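If the lower default is too aggressive for a given disk, the setting can presumably be raised like any other cluster setting; the value below is illustrative, and whether to raise it at all is an operational judgment call.

~~~ sql
SET CLUSTER SETTING storage.max_sync_duration = '60s';
~~~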

Command-line changes

- -- The standalone SQL shell executable `cockroach-sql` can now be installed (renamed/symlinked) as `cockroach`, and invoked via `cockroach sql`. For example, the following commands are all equivalent: `cockroach-sql -f foo.sql`, `cockroach-sql sql -f foo.sql`; and after running `ln -s cockroach-sql cockroach`, `cockroach sql -f foo.sql`. [#80930][#80930] -- Added a new flag `--advertise-http-addr`, which explicitly sets the HTTP advertise address that is used to display the URL for [DB Console access](https://www.cockroachlabs.com/docs/v22.1/ui-overview#db-console-access) and for proxying HTTP connections between nodes as described in [#73285](https://github.com/cockroachdb/cockroach/issues/73285). It may be necessary to set `--advertise-http-addr` in order for these features to work correctly in some deployments. Previously, the HTTP advertise address was derived from the OS hostname, the `--advertise-addr`, and the `--http-addr` flags, in that order. The new logic will override the HTTP advertise host with the host from `--advertise-addr` first if set, and then the host from `--http-addr`. The port will **never** be inherited from `--advertise-host` and will only be inherited from `--http-addr`, which is `8080` by default. [#81316][#81316] -- If [node decommissioning](https://www.cockroachlabs.com/docs/v22.1/node-shutdown?filters=decommission) is slow or stalls, the descriptions of some "stuck" replicas are now printed to the operator. [#79516][#79516] -- [`cockroach debug zip`](https://www.cockroachlabs.com/docs/v22.1/cockroach-debug-zip) now includes system tables using a denylist instead of an allowlist. [#81383][#81383] - -

DB Console changes

- -- Added more job types to the **Type** filter on the [Jobs page](https://www.cockroachlabs.com/docs/v22.1/ui-jobs-page#filter-jobs). [#80128][#80128] -- Added a dropdown filter on the [Node Diagnostics page](https://www.cockroachlabs.com/docs/v22.1/ui-debug-pages#even-more-advanced-debugging) to view by **Active**, **Decommissioned**, or **All** nodes. [#80320][#80320] -- The custom selection in the time picker on the Metrics dashboards, [SQL Activity page](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page), and other DB Console pages now defaults to the currently selected time. [#80794][#80794] -- Updated all dates to use 24h format in UTC. [#81747][#81747] -- Fixed the size of the table area on the [Statements](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page) and [Transactions](https://www.cockroachlabs.com/docs/v22.1/ui-transactions-page) pages to prevent cutting off the columns selector and filters. [#81746][#81746] -- The [Job status](https://www.cockroachlabs.com/docs/v22.1/ui-jobs-page#job-status) on the Jobs page of the DB Console will now show a status column for [changefeed](https://www.cockroachlabs.com/docs/v22.1/change-data-capture-overview) jobs and display the `highwater_timestamp` value in a separate column. This more closely matches the SQL output of [`SHOW CHANGEFEED JOBS`](https://www.cockroachlabs.com/docs/v22.1/show-jobs). The highwater timestamp now displays as the nanosecond system time value by default, with the human-readable value in the tooltip, since the decimal value is copy/pasted more often. [#81757][#81757] -- The tooltip for a Session's status on the [Sessions page](https://www.cockroachlabs.com/docs/v22.1/ui-sessions-page) has been updated with a more explicit definition: `A session is Active if it has an open explicit or implicit transaction (individual SQL statement) with a statement that is actively running or waiting to acquire a lock. A session is Idle if it is not executing a statement.` [#81904][#81904] - -

Bug fixes

- -- Previously, CockroachDB could lose the `INT2VECTOR` and `OIDVECTOR` type of some arrays. This is now fixed. [#78581][#78581] -- Previously, CockroachDB could encounter an internal error when evaluating queries with `OFFSET` and `LIMIT` clauses when the addition of the `offset` and the `limit` value would be larger than `int64` range. This is now fixed. [#79878][#79878] -- Previously, a custom time-series metric `sql.distsql.queries.spilled` was computed incorrectly, leading to an exaggerated number. This is now fixed. [#79882][#79882] -- Fixed a bug where `NaN` coordinates when using `ST_Intersects`/`ST_Within`/`ST_Covers` would return `true` instead of `false` for point-in-polygon operations. [#80202][#80202] -- Added a detailed error message for index out of bounds when decoding a binary tuple datum. This does not fix the root cause, but should give more insight into what is happening. [#79933][#79933] -- Fixed a bug where `ST_MinimumBoundingCircle` would panic with infinite coordinates and a `num_segments` argument. [#80347][#80347] -- Addressed an issue where automatic encryption-at-rest data key rotation would be disabled after a node restart without a store key rotation. [#80563][#80563] -- Fixed the formatting/printing behavior for [`ALTER DEFAULT PRIVILEGES`](https://www.cockroachlabs.com/docs/v22.1/alter-default-privileges), which will correct some mistaken error messages. [#80327][#80327] -- Fixed a bug whereby the cluster version could regress due to a race condition. [#80711][#80711] -- Fixed a rare crash which could occur when restarting a node after dropping tables. [#80572][#80572] -- Previously, in very rare circumstances, CockroachDB could incorrectly evaluate queries with an `ORDER BY` clause when the prefix of ordering was already provided by the index ordering of the scanned table. [#80714][#80714] -- Index recommendations are no longer presented for system tables in the output of [`EXPLAIN`](https://www.cockroachlabs.com/docs/v22.1/explain) statements. [#80952][#80952] -- Fixed a goroutine leak when internal rangefeed clients received certain kinds of retriable errors. [#80798][#80798] -- Fixed a bug that allowed duplicate constraint names for the same table if the constraints were on hidden columns. [#80637][#80637] -- Errors encountered when sending rebalancing hints to the [storage layer](https://www.cockroachlabs.com/docs/v22.1/architecture/storage-layer) during [`IMPORT`](https://www.cockroachlabs.com/docs/v22.1/import)s and index creation are now only logged, and no longer cause the job to fail. [#80469][#80469] -- Fixed a bug where if a transaction's commit time is pushed forward from its initial provisional time, an enclosing [`CREATE MATERIALIZED VIEW AS ...`](https://www.cockroachlabs.com/docs/v22.1/create-view) might fail to find other descriptors created in the same transaction during the view's backfill stage. The detailed descriptor of this bug is summarized in issue [#79015](https://github.com/cockroachdb/cockroach/issues/79015). [#80908][#80908] -- Contention statistics are now collected for SQL statistics when tracing is enabled. [#81070][#81070] -- Fixed a bug in [row-level TTL](https://www.cockroachlabs.com/docs/v22.1/row-level-ttl) where the last range key of a table may overlap with a separate table or index, resulting in an `error decoding X bytes` error message when performing row-level TTL. [#81262][#81262] -- Fixed a bug where `format_type` on the `void` type resulted in an error. 
[#81323][#81323] -- Fixed a bug in which some prepared statements could result in incorrect results when executed. This could occur when the prepared statement included an equality comparison between an index column and a placeholder, and the placeholder was cast to a type that was different from the column type. For example, if column `a` was of type `DECIMAL`, the following prepared query could produce incorrect results when executed: `SELECT * FROM t_dec WHERE a = $1::INT8;` [#81345][#81345] -- Fixed a bug where `ST_MinimumBoundingCircle` with `NaN` coordinates could panic. [#81462][#81462] -- Fixed a panic that was caused by setting the `tracing` session variable using [`SET LOCAL`](https://www.cockroachlabs.com/docs/v22.1/set-vars) or [`ALTER ROLE ... SET`](https://www.cockroachlabs.com/docs/v22.1/alter-role). [#81505][#81505] -- Fixed a bug where [`GRANT ALL TABLES IN SCHEMA`](https://www.cockroachlabs.com/docs/v22.1/grant) would not resolve the correct database name if it was explicitly specified. [#81553][#81553] -- Previously, cancelling `COPY` commands would show an `XXUUU` error, instead of `57014`. This is now fixed. [#81595][#81595] -- Fixed a bug that caused errors with the message `unable to vectorize execution plan: unhandled expression type` in rare cases. This bug had been present since v21.2.0. [#81591][#81591] -- Fixed a bug where [changefeeds](https://www.cockroachlabs.com/docs/v22.1/change-data-capture-overview) could fail permanently if encountering an error while planning their distribution, even though such errors are usually transient. [#81685][#81685] -- Fixed a gap in disk-stall detection. Previously, disk stalls during filesystem metadata operations could go undetected, inducing deadlocks. Now stalls during these types of operations will correctly fatal the process. [#81752][#81752] -- Fixed an issue where the `encryptionStatus` field on the [**Stores** debug page](https://www.cockroachlabs.com/docs/v22.1/ui-debug-pages) of the DB Console would display an error instead of displaying encryption details when encryption-at-rest is enabled. [#81500][#81500] -- In v21.1, a bug was introduced whereby default values were recomputed when populating data in new secondary indexes for columns which were added in the same transaction as the index. This would arise, for example, in cases like `ALTER TABLE t ADD COLUMN f FLOAT8 UNIQUE DEFAULT (random())`. If the default expression was not volatile, then the recomputation was harmless. If, however, the default expression was volatile, the data in the secondary index would not match the data in the primary index: a corrupt index would have been created. This bug has now been fixed. [#81549][#81549] -- Previously, when running [`ALTER DEFAULT PRIVILEGES IN SCHEMA {virtual schema}`](https://www.cockroachlabs.com/docs/v22.1/alter-schema), a panic occurred. This now returns the error message `{virtual schema} is not a physical schema`. [#81704][#81704] -- Previously, CockroachDB would encounter an internal error when executing queries with `lead` or `lag` window functions when the default argument had a different type than the first argument. This is now fixed. [#81756][#81756] -- Fixed an issue where a left lookup join could have incorrect results. In particular, some output rows could have non-`NULL` values for right-side columns when the right-side columns should have been `NULL`. This issue only existed in v22.1.0 and prior development releases of v22.1.
[#82076][#82076] -- The [Statements](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page) and [Transactions](https://www.cockroachlabs.com/docs/v22.1/ui-transactions-page) pages no longer crash when a search term includes `*`. [#82085][#82085] -- The special characters `*` and `^` are no longer highlighted when searching on the [Statements](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page) and [Transactions](https://www.cockroachlabs.com/docs/v22.1/ui-transactions-page) pages. [#82085][#82085] -- Previously, if materialized view creation failed during the backfill stage, CockroachDB would properly clean up the view but not any of the back references. Back and forward references for materialized views are now cleaned up. [#82099][#82099] -- Fixed a bug where `\copy` in the CLI would panic. [#82197][#82197] -- Fixed a bug introduced in v21.2 where the `sql-stats-compaction` job had a chance of not being scheduled during an upgrade from v21.1 to v21.2, causing persisted statement and transaction statistics to be enabled without memory accounting. [#82283][#82283] -- Fixed an edge case where `VALUES` clauses with nested tuples could fail to be type-checked properly in rare cases. [#82298][#82298] -- The [`CREATE SEQUENCE ... AS`](https://www.cockroachlabs.com/docs/v22.1/create-sequence) statement now returns a valid error message when the specified type name does not exist. [#82322][#82322] -- The [`SHOW STATISTICS`](https://www.cockroachlabs.com/docs/v22.1/show-statistics) output no longer displays statistics involving dropped columns. [#82315][#82315] -- Fixed a bug where [changefeeds](https://www.cockroachlabs.com/docs/v22.1/change-data-capture-overview) created before upgrading to v22.1 would silently fail to emit any data other than resolved timestamps. [#82312][#82312] -- Fixed a rare crash indicating a nil-pointer dereference in `google.golang.org/grpc/internal/transport.(*Stream).Context(...)`. [#80911][#80911] - -

Performance improvements

- -- Bulk ingestion of unsorted data during [`IMPORT`](https://www.cockroachlabs.com/docs/v22.1/import) and schema changes uses a higher level of parallelism to send produced data to the storage layer. [#80386][#80386] - -

Docker

- -- Refactored the initialization process of the Docker image to accommodate initialization scripts with memory storage. [#80355][#80355] - -
- -

Contributors

- -This release includes 183 merged PRs by 55 authors. -We would like to thank the following contributors from the CockroachDB community: - -- Nathan Lowe (first-time contributor) - -
- -[#78581]: https://github.com/cockroachdb/cockroach/pull/78581 -[#78636]: https://github.com/cockroachdb/cockroach/pull/78636 -[#79516]: https://github.com/cockroachdb/cockroach/pull/79516 -[#79878]: https://github.com/cockroachdb/cockroach/pull/79878 -[#79882]: https://github.com/cockroachdb/cockroach/pull/79882 -[#79930]: https://github.com/cockroachdb/cockroach/pull/79930 -[#79933]: https://github.com/cockroachdb/cockroach/pull/79933 -[#80128]: https://github.com/cockroachdb/cockroach/pull/80128 -[#80202]: https://github.com/cockroachdb/cockroach/pull/80202 -[#80278]: https://github.com/cockroachdb/cockroach/pull/80278 -[#80292]: https://github.com/cockroachdb/cockroach/pull/80292 -[#80320]: https://github.com/cockroachdb/cockroach/pull/80320 -[#80327]: https://github.com/cockroachdb/cockroach/pull/80327 -[#80345]: https://github.com/cockroachdb/cockroach/pull/80345 -[#80347]: https://github.com/cockroachdb/cockroach/pull/80347 -[#80355]: https://github.com/cockroachdb/cockroach/pull/80355 -[#80386]: https://github.com/cockroachdb/cockroach/pull/80386 -[#80434]: https://github.com/cockroachdb/cockroach/pull/80434 -[#80469]: https://github.com/cockroachdb/cockroach/pull/80469 -[#80493]: https://github.com/cockroachdb/cockroach/pull/80493 -[#80517]: https://github.com/cockroachdb/cockroach/pull/80517 -[#80563]: https://github.com/cockroachdb/cockroach/pull/80563 -[#80572]: https://github.com/cockroachdb/cockroach/pull/80572 -[#80637]: https://github.com/cockroachdb/cockroach/pull/80637 -[#80711]: https://github.com/cockroachdb/cockroach/pull/80711 -[#80714]: https://github.com/cockroachdb/cockroach/pull/80714 -[#80718]: https://github.com/cockroachdb/cockroach/pull/80718 -[#80794]: https://github.com/cockroachdb/cockroach/pull/80794 -[#80798]: https://github.com/cockroachdb/cockroach/pull/80798 -[#80801]: https://github.com/cockroachdb/cockroach/pull/80801 -[#80908]: https://github.com/cockroachdb/cockroach/pull/80908 -[#80911]: https://github.com/cockroachdb/cockroach/pull/80911 -[#80930]: https://github.com/cockroachdb/cockroach/pull/80930 -[#80952]: https://github.com/cockroachdb/cockroach/pull/80952 -[#80987]: https://github.com/cockroachdb/cockroach/pull/80987 -[#81019]: https://github.com/cockroachdb/cockroach/pull/81019 -[#81070]: https://github.com/cockroachdb/cockroach/pull/81070 -[#81261]: https://github.com/cockroachdb/cockroach/pull/81261 -[#81262]: https://github.com/cockroachdb/cockroach/pull/81262 -[#81316]: https://github.com/cockroachdb/cockroach/pull/81316 -[#81323]: https://github.com/cockroachdb/cockroach/pull/81323 -[#81345]: https://github.com/cockroachdb/cockroach/pull/81345 -[#81383]: https://github.com/cockroachdb/cockroach/pull/81383 -[#81462]: https://github.com/cockroachdb/cockroach/pull/81462 -[#81469]: https://github.com/cockroachdb/cockroach/pull/81469 -[#81496]: https://github.com/cockroachdb/cockroach/pull/81496 -[#81500]: https://github.com/cockroachdb/cockroach/pull/81500 -[#81505]: https://github.com/cockroachdb/cockroach/pull/81505 -[#81549]: https://github.com/cockroachdb/cockroach/pull/81549 -[#81550]: https://github.com/cockroachdb/cockroach/pull/81550 -[#81553]: https://github.com/cockroachdb/cockroach/pull/81553 -[#81591]: https://github.com/cockroachdb/cockroach/pull/81591 -[#81595]: https://github.com/cockroachdb/cockroach/pull/81595 -[#81684]: https://github.com/cockroachdb/cockroach/pull/81684 -[#81685]: https://github.com/cockroachdb/cockroach/pull/81685 -[#81704]: https://github.com/cockroachdb/cockroach/pull/81704 -[#81746]: 
https://github.com/cockroachdb/cockroach/pull/81746 -[#81747]: https://github.com/cockroachdb/cockroach/pull/81747 -[#81752]: https://github.com/cockroachdb/cockroach/pull/81752 -[#81756]: https://github.com/cockroachdb/cockroach/pull/81756 -[#81757]: https://github.com/cockroachdb/cockroach/pull/81757 -[#81820]: https://github.com/cockroachdb/cockroach/pull/81820 -[#81904]: https://github.com/cockroachdb/cockroach/pull/81904 -[#81919]: https://github.com/cockroachdb/cockroach/pull/81919 -[#82076]: https://github.com/cockroachdb/cockroach/pull/82076 -[#82085]: https://github.com/cockroachdb/cockroach/pull/82085 -[#82099]: https://github.com/cockroachdb/cockroach/pull/82099 -[#82197]: https://github.com/cockroachdb/cockroach/pull/82197 -[#82283]: https://github.com/cockroachdb/cockroach/pull/82283 -[#82298]: https://github.com/cockroachdb/cockroach/pull/82298 -[#82312]: https://github.com/cockroachdb/cockroach/pull/82312 -[#82315]: https://github.com/cockroachdb/cockroach/pull/82315 -[#82322]: https://github.com/cockroachdb/cockroach/pull/82322 -[01c751566]: https://github.com/cockroachdb/cockroach/commit/01c751566 -[e23f28a97]: https://github.com/cockroachdb/cockroach/commit/e23f28a97 diff --git a/src/current/_includes/releases/v22.1/v22.1.10.md b/src/current/_includes/releases/v22.1/v22.1.10.md deleted file mode 100644 index 634300fce9d..00000000000 --- a/src/current/_includes/releases/v22.1/v22.1.10.md +++ /dev/null @@ -1,52 +0,0 @@ -## v22.1.10 - -Release Date: October 28, 2022 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

General changes

- -- Added three [cluster settings](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) in [#89774][#89774] to collect trace data for outlier executions with low overhead. This is only available in [v22.1](https://www.cockroachlabs.com/docs/releases/v22.1); in [v22.2]({% link releases/v22.2.md %}) and later we have other mechanisms to collect outlier traces. Traces come in handy when looking to investigate latency spikes, and these settings are intended to supplant most uses of `sql.trace.stmt.enable_threshold`. That setting enables verbose tracing for all statements with 100% probability which can cause a lot of overhead in production clusters, and also a lot of logging pressure. Instead we introduce the following: - - `trace.fingerprint` - - `trace.fingerprint.probability` - - `trace.fingerprint.threshold` - - Put together (all have to be set) they only enable tracing for the statement with the set hex-encoded fingerprint, and do so probabilistically (where the probability is whatever `trace.fingerprint.probability` is set to), logging it only if the latency threshold is exceeded (configured using `trace.fingerprint.threshold`). To obtain a hex-encoded fingerprint, look at the contents of `system.statement_statistics`. For example: - - {% include_cached copy-clipboard.html %} - ~~~ sql - SELECT encode(fingerprint_id, 'hex'), (statistics -> 'statistics' ->> 'cnt')::INT AS count, metadata ->> 'query' AS query FROM system.statement_statistics ORDER BY COUNT DESC limit 10; - ~~~ - - ~~~ - encode | count | query - -----------------+-------+-------------------------------------------------------------------------------------------------------------------- - 4e4214880f87d799 | 2680 | INSERT INTO history(h_c_id, h_c_d_id, h_c_w_id, h_d_id, h_w_id, h_amount, h_date, h_data) VALUES ($1, $2, __more6__) - ~~~ - -
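Putting the three settings together might look like the sketch below. The fingerprint is the one from the sample output above; the probability and threshold values are illustrative assumptions.

~~~ sql
SET CLUSTER SETTING trace.fingerprint = '4e4214880f87d799';
SET CLUSTER SETTING trace.fingerprint.probability = 0.01;
SET CLUSTER SETTING trace.fingerprint.threshold = '2s';
~~~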

Bug fixes

- -- The [Statements page](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page), [Transactions page](https://www.cockroachlabs.com/docs/v22.1/ui-transactions-page), and [Transaction Details page](https://www.cockroachlabs.com/docs/v22.1/ui-transactions-page#transaction-details-page) in the [DB console](https://www.cockroachlabs.com/docs/v22.1/ui-overview) now properly show the **Regions** and **Nodes** columns and filters for [multi-region clusters](https://www.cockroachlabs.com/docs/v22.1/multiregion-overview). [#89818][#89818] -- Fixed a bug which caused [`ALTER CHANGEFEED`](https://www.cockroachlabs.com/docs/v22.1/alter-changefeed) to fail if the changefeed was created with a cursor option and had been running for more than [`gc.ttlseconds`](https://www.cockroachlabs.com/docs/v22.1/configure-replication-zones#gc-ttlseconds). [#89399][#89399] -- Fixed a bug that caused internal errors in rare cases when running [common table expressions](https://www.cockroachlabs.com/docs/v22.1/common-table-expressions) (a.k.a. CTEs, or statements with `WITH` clauses). This bug was only present in [v22.2.0-beta.2](https://www.cockroachlabs.com/docs/releases/v22.2#v22-2-0-beta-2), [v22.2.0-beta.3]({% link releases/v22.2.md%}#v22-2-0-beta-3), [v21.2.16]({% link releases/v21.2.md %}#v21-2-16), and [v22.1.9]({% link releases/v22.1.md %}#v22-1-9). [#89854][#89854] -- Fixed a bug where it was possible for [leases](https://www.cockroachlabs.com/docs/v22.1/architecture/replication-layer#leases) to temporarily move outside of explicitly configured regions. This often happened during [load-based rebalancing](https://www.cockroachlabs.com/docs/v22.1/architecture/replication-layer#load-based-replica-rebalancing), something CockroachDB does continually across the cluster. Because of this, it was also possible to observe a continual rate of lease thrashing as leases moved out of configured zones, triggered rebalancing, and induced other leases to move out of the configured zone while the original set moved back, and so on. [#90013][#90013] -- Excluded [check constraints](https://www.cockroachlabs.com/docs/v22.1/check) of [hash-sharded indexes](https://www.cockroachlabs.com/docs/v22.1/hash-sharded-indexes) from being invalidated when executing [`IMPORT INTO`](https://www.cockroachlabs.com/docs/v22.1/import-into). [#89528][#89528] -- Fixed overlapping charts on the [Statement Details page](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page#statement-details-page). [#90090][#90090] -- `initial_scan_only` [changefeeds](https://www.cockroachlabs.com/docs/v22.1/create-changefeed#initial-scan) now ensure that all messages have successfully flushed to the sink prior to completion, instead of potentially missing messages. [#90293][#90293] -- Fixed a bug introduced in [v22.1.9]({% link releases/v22.1.md %}#v22-1-9) that caused nodes to refuse to run [jobs](https://www.cockroachlabs.com/docs/v22.1/show-jobs) under rare circumstances. [#90265][#90265] -- Fixed a bug that caused incorrect evaluation of [comparison expressions](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#comparison-functions) involving [`TIME`](https://www.cockroachlabs.com/docs/v22.1/time) and [`INTERVAL`](https://www.cockroachlabs.com/docs/v22.1/interval) types, e.g., `col::TIME + '10 hrs'::INTERVAL' > '01:00'::TIME`. [#90370][#90370] - -

Contributors

- -This release includes 28 merged PRs by 21 authors. - -[#89399]: https://github.com/cockroachdb/cockroach/pull/89399 -[#89528]: https://github.com/cockroachdb/cockroach/pull/89528 -[#89774]: https://github.com/cockroachdb/cockroach/pull/89774 -[#89818]: https://github.com/cockroachdb/cockroach/pull/89818 -[#89854]: https://github.com/cockroachdb/cockroach/pull/89854 -[#90013]: https://github.com/cockroachdb/cockroach/pull/90013 -[#90090]: https://github.com/cockroachdb/cockroach/pull/90090 -[#90265]: https://github.com/cockroachdb/cockroach/pull/90265 -[#90293]: https://github.com/cockroachdb/cockroach/pull/90293 -[#90370]: https://github.com/cockroachdb/cockroach/pull/90370 diff --git a/src/current/_includes/releases/v22.1/v22.1.11.md b/src/current/_includes/releases/v22.1/v22.1.11.md deleted file mode 100644 index 0e9b594a508..00000000000 --- a/src/current/_includes/releases/v22.1/v22.1.11.md +++ /dev/null @@ -1,59 +0,0 @@ -## v22.1.11 - -Release Date: November 14, 2022 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

Security updates

-
-- HTTP API endpoints under the `/api/v2/` prefix now allow requests through when the cluster is running in insecure mode. In that case, requests to these endpoints will have the username set to `root`. [#87274][#87274]
-
-

SQL language changes

- -- Added a new [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) `cloudstorage.azure.concurrent_upload_buffers` to configure the number of concurrent buffers used when uploading files to Azure. [#90449][#90449] - -
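A rough sketch of adjusting the new setting; the value shown is illustrative rather than a recommendation:

~~~ sql
-- Raise the number of concurrent buffers used for Azure uploads (illustrative value).
SET CLUSTER SETTING cloudstorage.azure.concurrent_upload_buffers = 5;
~~~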

DB Console changes

-
-- Requests to fetch table and database [statistics](https://www.cockroachlabs.com/docs/v22.1/cost-based-optimizer#table-statistics) now have limited concurrency. This may make loading the [Databases page](https://www.cockroachlabs.com/docs/v22.1/ui-databases-page) slower, but in return should make those pages less disruptive. [#90575][#90575]
-- Updated the filter labels from **App** to **Application Name** and from **Username** to **User Name** on the [SQL Activity page](https://www.cockroachlabs.com/docs/v22.1/ui-overview#sql-activity). [#91294][#91294]
-- Fixed the filter and label style of the **Transactions** filter on the [SQL Activity page](https://www.cockroachlabs.com/docs/v22.1/ui-overview#sql-activity). [#91319][#91319]
-- Fixed the filters in the DB Console so that when a filter's contents are tall, the list can be scrolled to reach **Apply**. [#90479][#90479]
-- Added a horizontal scroll to the table on the **Explain Plan** tab under [**Statement Details**](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page). [#91329][#91329]
-- Fixed the filter height on the [Sessions page](https://www.cockroachlabs.com/docs/v22.1/ui-sessions-page) so that the full dropdown can be viewed by scrolling. [#91325][#91325]
-
-

Bug fixes

-
-- Fixed an extremely rare out-of-bounds crash in the [protected timestamp](https://www.cockroachlabs.com/docs/v22.1/architecture/storage-layer#protected-timestamps) subsystem. [#90452][#90452]
-- Fixed the calculation of the `pg_attribute.attnum` column for indexes so that the `attnum` is always based on the order in which the column appears in the index. Also fixed the `pg_attribute` table so that it includes stored columns in secondary indexes. [#90728][#90728]
-- [TTL job](https://www.cockroachlabs.com/docs/v22.1/row-level-ttl#view-scheduled-ttl-jobs) decoding error messages now correctly contain hex-encoded key bytes instead of hex-encoded key pretty-print output. [#90727][#90727]
-- Fixed a bug where CockroachDB clusters running inside a Docker container on macOS, with a host filesystem mounted into the container, could incorrectly calculate the total available capacity of the filesystem. [#90868][#90868]
-- Fixed the error `invalid uvarint length of 9` that could occur during TTL jobs. This bug could affect keys with secondary tenant prefixes, which affects CockroachDB {{ site.data.products.serverless }} clusters. [#90606][#90606]
-- Previously, if a primary key name was a reserved SQL keyword, attempting to use the [`DROP CONSTRAINT, ADD CONSTRAINT`](https://www.cockroachlabs.com/docs/v22.1/drop-constraint#drop-and-add-a-primary-key-constraint) statements to change a primary key would result in a `constraint already exists` error. This is now fixed. [#91041][#91041]
-- Fixed a bug where, in large [multi-region clusters](https://www.cockroachlabs.com/docs/v22.1/multiregion-overview), it was possible for the leasing mechanism used for jobs to get caught in a live-lock scenario, which could result in jobs not being adopted. [#91066][#91066]
-- Fixed a bug that caused incorrect results and internal errors when a [`LEFT JOIN`](https://www.cockroachlabs.com/docs/v22.1/joins) operated on a table with [virtual computed columns](https://www.cockroachlabs.com/docs/v22.1/computed-columns). The bug only presented when the optimizer planned a "paired joiner". Only the values of the virtual columns would be incorrect; they could be `NULL` when their correct value was not `NULL`. An internal error would occur in the same situation if the virtual column had a `NOT NULL` constraint. This bug has been present since v22.1.0. [#91017][#91017]
-
-

Performance improvements

- -- Loading the Databases page in the UI is now less expensive when there are a large number of databases and a large number of tables in each database and a large number of ranges in the cluster. [#91014][#91014] - -

Contributors

- -This release includes 34 merged PRs by 18 authors. - -[#87274]: https://github.com/cockroachdb/cockroach/pull/87274 -[#90449]: https://github.com/cockroachdb/cockroach/pull/90449 -[#90452]: https://github.com/cockroachdb/cockroach/pull/90452 -[#90479]: https://github.com/cockroachdb/cockroach/pull/90479 -[#90575]: https://github.com/cockroachdb/cockroach/pull/90575 -[#90606]: https://github.com/cockroachdb/cockroach/pull/90606 -[#90727]: https://github.com/cockroachdb/cockroach/pull/90727 -[#90728]: https://github.com/cockroachdb/cockroach/pull/90728 -[#90868]: https://github.com/cockroachdb/cockroach/pull/90868 -[#91014]: https://github.com/cockroachdb/cockroach/pull/91014 -[#91017]: https://github.com/cockroachdb/cockroach/pull/91017 -[#91041]: https://github.com/cockroachdb/cockroach/pull/91041 -[#91066]: https://github.com/cockroachdb/cockroach/pull/91066 -[#91294]: https://github.com/cockroachdb/cockroach/pull/91294 -[#91319]: https://github.com/cockroachdb/cockroach/pull/91319 -[#91325]: https://github.com/cockroachdb/cockroach/pull/91325 -[#91329]: https://github.com/cockroachdb/cockroach/pull/91329 diff --git a/src/current/_includes/releases/v22.1/v22.1.12.md b/src/current/_includes/releases/v22.1/v22.1.12.md deleted file mode 100644 index a7cb7b5d412..00000000000 --- a/src/current/_includes/releases/v22.1/v22.1.12.md +++ /dev/null @@ -1,101 +0,0 @@ -## v22.1.12 - -Release Date: December 12, 2022 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

General changes

- -- Bulk operations now log sanitized connection destinations, for example: - - `backup planning to connect to destination gs://test/backupadhoc?AUTH=specified&CREDENTIALS=redacted` [#92207][#92207] - -

{{ site.data.products.enterprise }} edition changes

-
-- [Kafka sinks](https://www.cockroachlabs.com/docs/v22.1/changefeed-sinks#kafka-sink-configuration) can now optionally be configured with a `Compression` field in the `kafka_sink_config` option. This field can be set to `none` (default), `GZIP`, `SNAPPY`, `LZ4`, or `ZSTD`, and determines the compression protocol used when emitting events (see the sketch below). [#91276][#91276]
-
-
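A minimal sketch of enabling compression on a Kafka sink; the table name and broker address are placeholders:

~~~ sql
-- Emit changefeed events compressed with GZIP ('movr.rides' and the Kafka URI are placeholders).
CREATE CHANGEFEED FOR TABLE movr.rides
  INTO 'kafka://localhost:9092'
  WITH kafka_sink_config = '{"Compression": "GZIP"}';
~~~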

Operational changes

-
-- Logs produced when an increased `vmodule` setting is specified for `s3_storage` are now directed to the `DEV` channel rather than `STDOUT`. [#91960][#91960]
-- Introduced a metric (`replicas.leaders_invalid_lease`) that indicates how many replicas are Raft group leaders but hold invalid leases. [#91194][#91194]
-
-

DB Console changes

-
-- Changed the height of the column selector so that it hints that more options are available when scrolling. [#91910][#91910]
-- Added the fingerprint ID in hex format to the [Statement Details](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page) page and [Transaction Details](https://www.cockroachlabs.com/docs/v22.1/ui-transactions-page#transaction-details-page) page. [#91959][#91959]
-- Updated the tooltip on the `SQL Statement Errors` chart on the Metrics page. [#92711][#92711]
-
-

Bug fixes

-
-- Fixed a bug in which panics triggered by certain DDL statements were not properly recovered, leading to the node crashing. [#91555][#91555]
-- Fixed a panic that could occur when calling `st_distancespheroid` or `st_distancesphere` with a spatial object containing a NaN coordinate. This now produces the error `input is out of range`. [#91634][#91634]
-- Fixed a bug that resulted in some retriable errors not being retried during `IMPORT`. [#90432][#90432]
-- Fixed a bug in `Concat` projection operators for arrays that could cause non-null values to be added to the array when one of the arguments was null. [#91653][#91653]
-- Previously, `SET DEFAULT NULL` resulted in a column whose `DefaultExpr` was `NULL`. This was problematic when used with `ALTER COLUMN TYPE`, which creates a temporary computed column, and therefore violated the validation rule that a computed column cannot have a default expression. This is now fixed by setting `DefaultExpr` to `nil` when `SET DEFAULT NULL` is used. [#91089][#91089]
-- Fixed a bug introduced in v21.2 that could cause an internal error in rare cases when a query required a constrained index scan to return results in order. [#91692][#91692]
-- Fixed a bug which, in rare cases, could result in a [changefeed](https://www.cockroachlabs.com/docs/v22.1/change-data-capture-overview) missing rows that occur around the time of a range split, when writing transactions take longer than the closed timestamp target duration (default 3s). [#91749][#91749]
-- Added leading zeros to fingerprint IDs with fewer than 16 characters. [#91959][#91959]
-- Fixed a bug introduced in v20.2 that could in rare cases cause filters to be dropped from a query plan with many joins. [#91654][#91654]
-- Fixed an unhandled error that could happen if [`ALTER DEFAULT PRIVILEGES`](https://www.cockroachlabs.com/docs/v22.1/alter-default-privileges) was run on the system database. [#92083][#92083]
-- Reduced the amount that `RESTORE` over-splits ranges. This behavior is enabled by default. [#91141][#91141]
-- Fixed a bug causing changefeeds to fail when a value was deleted while the changefeed was running on a non-primary [column family with multiple columns](https://www.cockroachlabs.com/docs/v22.1/changefeeds-on-tables-with-column-families). [#91953][#91953]
-- Stripped quotation marks from database and table names to correctly query for index usage statistics. [#92282][#92282]
-- Fixed the [statement activity](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page#statement-fingerprint-page) page so that it no longer shows multi-statement implicit transactions as "explicit." [#92430][#92430]
-- Fixed a bug existing since v20.2 that could cause incorrect results in rare cases for queries with inner joins and left joins. For the bug to occur, the left join had to be in the input of the inner join, and the inner join filters had to reference both inputs of the left join and not filter `NULL` values from the right input of the left join. Additionally, the right input of the left join had to contain at least one join, with one input not referenced by the left join's `ON` condition. [#92103][#92103]
-- When set to `true`, the `sql.metrics.statement_details.dump_to_logs` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) no longer causes a mutex deadlock. [#92278][#92278]
-- Fixed incorrect cancellation logic when attempting to detect stuck rangefeeds. [#92702][#92702]
-- Fixed an internal error when comparing a tuple type with a non-tuple type. [#92714][#92714]
-- The `attidentity` value for a `GENERATED BY DEFAULT AS IDENTITY` column is now correctly set to `d`. [#92835][#92835]
-- Previously, CockroachDB could incorrectly evaluate queries that performed left semi and left anti "virtual lookup" joins on tables in `pg_catalog` or `information_schema`. These join types can be planned when a subquery is used inside of a filter condition. The bug was introduced in v20.2.0 and is now fixed. [#92881][#92881]
-
-

Performance improvements

-
-- To protect against unexpected situations where garbage collection would trigger too frequently, the GC score cooldown period has been lowered. The GC score ratio is computed from MVCC stats and uses the ratio of live objects and the estimated garbage age to estimate the collectability of old data. The reduced score triggers GC earlier, lowering the interval between runs by roughly 3 times and roughly halving peak garbage usage, at the expense of about a 30% increase in wasteful data scanning on constantly updated data. [#92816][#92816]
-- In some cases, CockroachDB now correctly incorporates the value of the `OFFSET` clause when determining the number of rows that need to be read when the `LIMIT` clause is also present. Note that there was no correctness issue here; only extra, unnecessary rows could be read (see the sketch after this list). [#92839][#92839]
-- [`SHOW BACKUP`](https://www.cockroachlabs.com/docs/v22.1/show-backup) on a backup containing several table descriptors is now more performant. [#93143][#93143]
-
-
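As an illustrative sketch of the `LIMIT`/`OFFSET` improvement above (the table and column names are hypothetical):

~~~ sql
-- With the improvement, this ordered scan only needs to read 20 + 10 = 30 rows,
-- rather than potentially reading extra rows beyond the offset plus the limit.
SELECT * FROM t ORDER BY k LIMIT 10 OFFSET 20;
~~~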
- -

Contributors

- -This release includes 75 merged PRs by 37 authors. -We would like to thank the following contributors from the CockroachDB community: - -- quanuw (first-time contributor) - -
- -[#90432]: https://github.com/cockroachdb/cockroach/pull/90432 -[#91089]: https://github.com/cockroachdb/cockroach/pull/91089 -[#91141]: https://github.com/cockroachdb/cockroach/pull/91141 -[#91194]: https://github.com/cockroachdb/cockroach/pull/91194 -[#91276]: https://github.com/cockroachdb/cockroach/pull/91276 -[#91485]: https://github.com/cockroachdb/cockroach/pull/91485 -[#91555]: https://github.com/cockroachdb/cockroach/pull/91555 -[#91634]: https://github.com/cockroachdb/cockroach/pull/91634 -[#91653]: https://github.com/cockroachdb/cockroach/pull/91653 -[#91654]: https://github.com/cockroachdb/cockroach/pull/91654 -[#91692]: https://github.com/cockroachdb/cockroach/pull/91692 -[#91703]: https://github.com/cockroachdb/cockroach/pull/91703 -[#91749]: https://github.com/cockroachdb/cockroach/pull/91749 -[#91910]: https://github.com/cockroachdb/cockroach/pull/91910 -[#91953]: https://github.com/cockroachdb/cockroach/pull/91953 -[#91959]: https://github.com/cockroachdb/cockroach/pull/91959 -[#91960]: https://github.com/cockroachdb/cockroach/pull/91960 -[#92083]: https://github.com/cockroachdb/cockroach/pull/92083 -[#92103]: https://github.com/cockroachdb/cockroach/pull/92103 -[#92207]: https://github.com/cockroachdb/cockroach/pull/92207 -[#92278]: https://github.com/cockroachdb/cockroach/pull/92278 -[#92282]: https://github.com/cockroachdb/cockroach/pull/92282 -[#92430]: https://github.com/cockroachdb/cockroach/pull/92430 -[#92702]: https://github.com/cockroachdb/cockroach/pull/92702 -[#92711]: https://github.com/cockroachdb/cockroach/pull/92711 -[#92714]: https://github.com/cockroachdb/cockroach/pull/92714 -[#92816]: https://github.com/cockroachdb/cockroach/pull/92816 -[#92835]: https://github.com/cockroachdb/cockroach/pull/92835 -[#92839]: https://github.com/cockroachdb/cockroach/pull/92839 -[#92881]: https://github.com/cockroachdb/cockroach/pull/92881 -[#93143]: https://github.com/cockroachdb/cockroach/pull/93143 -[949e22e5c]: https://github.com/cockroachdb/cockroach/commit/949e22e5c -[ff54be2a7]: https://github.com/cockroachdb/cockroach/commit/ff54be2a7 diff --git a/src/current/_includes/releases/v22.1/v22.1.13.md b/src/current/_includes/releases/v22.1/v22.1.13.md deleted file mode 100644 index b6428654728..00000000000 --- a/src/current/_includes/releases/v22.1/v22.1.13.md +++ /dev/null @@ -1,58 +0,0 @@ -## v22.1.13 - -Release Date: January 9, 2023 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

DB Console changes

-
-- Removed the feedback survey link from the DB Console. [#93278][#93278]
-- Improved the readability of the [metric graph](https://www.cockroachlabs.com/docs/v22.1/ui-overview-dashboard) tooltip styling by preventing the content from collapsing. [#93929][#93929]
-- Fixed a bug where a timeseries query (`ts/query`) could return no data for graphs. Data is now returned because the resolution has been adjusted to the sample size. [#93620][#93620]
-
-

Bug fixes

- -- Fixed a bug that could manifest as [restore](https://www.cockroachlabs.com/docs/v22.1/restore) queries hanging during execution due to slow listing calls in the presence of several backup files. [#93224][#93224] -- Fixed a bug where empty [`COPY`](https://www.cockroachlabs.com/docs/v22.1/copy-from) commands would not escape after an EOF character, or error if encountering `\.` with no input. [#93260][#93260] -- Fixed a bug where running multiple schema change statements in a single command using a driver that uses the extended pgwire protocol internally ([Npgsql](https://www.npgsql.org/) in .Net as an example) could lead to the error: `"attempted to update job for mutation 2, but job already exists with mutation 1"`. [#92304][#92304] -- Fixed a bug where the non-default [`NULLS` ordering](https://www.cockroachlabs.com/docs/v22.1/order-by), `NULLS LAST`, was ignored in [window](https://www.cockroachlabs.com/docs/v22.1/window-functions) and [aggregate](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#aggregate-functions) functions. This bug could cause incorrect query results when `NULLS LAST` was used. This bug had been introduced in v22.1.0. [#93600][#93600] -- Fixed an issue where `DISTINCT ON` queries would fail with the error `"SELECT DISTINCT ON expressions must match initial ORDER BY expressions"` when the query included an [`ORDER BY`](https://www.cockroachlabs.com/docs/v22.1/order-by) clause containing `ASC NULLS LAST` or `DESC NULLS FIRST`. [#93608][#93608] -- Previously, CockroachDB would error when receiving [`GEOMETRY` or `GEOGRAPHY`](https://www.cockroachlabs.com/docs/v22.1/spatial-glossary#data-types) types using binary parameters. This is now resolved. [#93686][#93686] -- Fixed a bug where the `session_id` [session variable](https://www.cockroachlabs.com/docs/v22.1/show-vars) would not be properly set if used from a subquery. [#93857][#93857] -- Server logs are now correctly fsynced at every syncInterval. [#93994][#93994] -- [`CREATE ROLE`](https://www.cockroachlabs.com/docs/v22.1/create-role), [`DROP ROLE`](https://www.cockroachlabs.com/docs/v22.1/drop-role), [`GRANT`](https://www.cockroachlabs.com/docs/v22.1/grant), and [`REVOKE`](https://www.cockroachlabs.com/docs/v22.1/revoke) statements no longer work when the transaction is in read-only mode. [#94104][#94104] -- The `stxnamespace`, `stxkind`, and `stxstattarget` columns are now defined in the [`pg_statistics_ext` system catalog](https://www.cockroachlabs.com/docs/v22.1/pg-catalog). [#94008][#94008] -- Fixed a bug where tables that receive writes concurrent with portions of an [`ALTER TABLE ... SET LOCALITY REGIONAL BY ROW`](https://www.cockroachlabs.com/docs/v22.1/set-locality) statement could fail with the error: `duplicate key value violates unique constraint "new_primary_key"`. This had been introduced in v22.1. [#94252][#94252] -- Previously, CockroachDB could encounter an internal error when evaluating [window functions](https://www.cockroachlabs.com/docs/v22.1/window-functions) with a `RANGE` window frame mode with an `OFFSET PRECEDING` or `OFFSET FOLLOWING` boundary when an `ORDER BY` clause has a `NULLS LAST` option. This will now result in a regular error since the feature is marked as unsupported. [#94351][#94351] -- Record types can now be encoded with the binary encoding of the PostgreSQL wire protocol. Previously, trying to use this encoding could cause a panic. 
[#94420][#94420] -- Fixed a bug that caused incorrect selectivity estimation for queries with ORed predicates all referencing a common single table. [#94439][#94439] - -

Performance improvements

- -- Improved the performance of [`crdb_internal.default_privileges`](https://www.cockroachlabs.com/docs/v22.1/crdb-internal) population. [#94338][#94338] - -

Contributors

- -This release includes 39 merged PRs by 21 authors. - - - -[#92304]: https://github.com/cockroachdb/cockroach/pull/92304 -[#93224]: https://github.com/cockroachdb/cockroach/pull/93224 -[#93260]: https://github.com/cockroachdb/cockroach/pull/93260 -[#93278]: https://github.com/cockroachdb/cockroach/pull/93278 -[#93600]: https://github.com/cockroachdb/cockroach/pull/93600 -[#93608]: https://github.com/cockroachdb/cockroach/pull/93608 -[#93620]: https://github.com/cockroachdb/cockroach/pull/93620 -[#93686]: https://github.com/cockroachdb/cockroach/pull/93686 -[#93712]: https://github.com/cockroachdb/cockroach/pull/93712 -[#93857]: https://github.com/cockroachdb/cockroach/pull/93857 -[#93929]: https://github.com/cockroachdb/cockroach/pull/93929 -[#93994]: https://github.com/cockroachdb/cockroach/pull/93994 -[#94008]: https://github.com/cockroachdb/cockroach/pull/94008 -[#94104]: https://github.com/cockroachdb/cockroach/pull/94104 -[#94252]: https://github.com/cockroachdb/cockroach/pull/94252 -[#94338]: https://github.com/cockroachdb/cockroach/pull/94338 -[#94351]: https://github.com/cockroachdb/cockroach/pull/94351 -[#94420]: https://github.com/cockroachdb/cockroach/pull/94420 -[#94439]: https://github.com/cockroachdb/cockroach/pull/94439 diff --git a/src/current/_includes/releases/v22.1/v22.1.14.md b/src/current/_includes/releases/v22.1/v22.1.14.md deleted file mode 100644 index 5c5d1ff2bba..00000000000 --- a/src/current/_includes/releases/v22.1/v22.1.14.md +++ /dev/null @@ -1,62 +0,0 @@ -## v22.1.14 - -Release Date: February 6, 2023 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

SQL language changes

-
-- [`COPY`](https://www.cockroachlabs.com/docs/v22.1/copy-from) now logs errors encountered during the insert phase on the [`SQL_EXEC`](https://www.cockroachlabs.com/docs/v22.1/logging#sql_exec) logging channel. [#95175][#95175]
-- If `copy_from_retries_enabled` is set, [`COPY`](https://www.cockroachlabs.com/docs/v22.1/copy-from) is now able to retry under certain safe circumstances; for example, when `copy_from_atomic_enabled` is `false`, no explicit transaction is running the `COPY`, and the error returned is retriable (see the sketch after this list). [#95505][#95505]
-- `kv.bulkio.write_metadata_sst.enabled` now defaults to `false`. This change does not affect [`BACKUP`](https://www.cockroachlabs.com/docs/v22.1/backup) or [`RESTORE`](https://www.cockroachlabs.com/docs/v22.1/restore). [#96017][#96017]
-
-
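A minimal sketch of opting in to `COPY` retries, assuming (as the note above implies) that `copy_from_retries_enabled` and `copy_from_atomic_enabled` are session variables:

~~~ sql
-- Assumption: both of these are session variables, per the note above.
SET copy_from_atomic_enabled = false;   -- allow non-atomic COPY, one of the conditions for retries
SET copy_from_retries_enabled = true;   -- let COPY retry on retriable errors outside explicit transactions
~~~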

DB Console changes

- -- Removed the [**Reset SQL stats**](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page) and [**Reset index stats**](https://www.cockroachlabs.com/docs/v22.1/ui-databases-page#index-details) buttons from the DB Console for non-admin users. [#95325][#95325] -- [Graphs](https://www.cockroachlabs.com/docs/v22.1/ui-overview-dashboard) can now be clicked on to toggle legend "stickiness" and make the points stop following the mouse. This makes it easier to read dense graphs with many series plotted together. [#94786][#94786] - -

Bug fixes

- -- Fixed a bug where, in a cluster with nodes running both [v22.2]({% link releases/v22.2.md %}) and v22.1, [range replica](https://www.cockroachlabs.com/docs/v22.1/ui-replication-dashboard#review-of-cockroachdb-terminology) changes could sometimes fail on v22.1 leaseholders with the error `change replicas of r47 failed: descriptor changed: [expected] != [actual]`, without any apparent differences between the listed descriptors. Continuing to upgrade all nodes to v22.2 or rolling all nodes back to v22.1 would resolve this issue. [#94841][#94841] -- It is now possible to run [`cockroach version`](https://www.cockroachlabs.com/docs/v22.2/cockroach-version) and [`cockroach start`](https://www.cockroachlabs.com/docs/v22.2/cockroach-start) (and possibly other sub-commands) when the user running the command does not have permission to access the current working directory. [#94926][#94926] -- Fixed a bug where [`CLOSE ALL`](https://www.cockroachlabs.com/docs/v22.1/sql-grammar#close_cursor_stmt) would not respect the `ALL` flag and would instead attempt to close a cursor with no name. [#95440][#95440] -- Fixed a crash that could happen when formatting a tuple with an unknown type. [#95422][#95422] -- Fixed a bug where a [database restore](https://www.cockroachlabs.com/docs/v22.1/restore) would not [grant](https://www.cockroachlabs.com/docs/v22.1/grant) `CREATE` and `USAGE` on the public schema to the public [role](https://www.cockroachlabs.com/docs/v22.1/security-reference/authorization#users-and-roles). [#95537][#95537] -- Fixed a bug where [`pg_get_indexdef`](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators) didn't show the expression used to define an [expression-based index](https://www.cockroachlabs.com/docs/v22.1/partial-indexes), as well as a bug where the function was incorrectly including columns stored by the index. [#95585][#95585] -- Fixed a bug where a DNS lookup was performed during gossip remote forwarding while holding the gossip mutex. This could cause processing stalls if the DNS server was slow to respond. [#95441][#95441] -- Fixed a bug where [`RESTORE SYSTEM USERS`](https://www.cockroachlabs.com/docs/v22.1/restore#restoring-users-from-system-users-backup) would fail to restore [role options](https://www.cockroachlabs.com/docs/v22.1/security-reference/authorization#role-options). [#95295][#95295] -- Reduce contention between queries to register, deregister, and cancel [sessions](https://www.cockroachlabs.com/docs/v22.1/show-sessions). [#95654][#95654] -- Fixed a bug where a [backup](https://www.cockroachlabs.com/docs/v22.1/backup) of keys with many revisions would fail with `pebble: keys must be added in order`. [#95446][#95446] -- Fixed the `array_to_string` [built-in function](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators) so that nested arrays are traversed without printing [`ARRAY`](https://www.cockroachlabs.com/docs/v22.1/array) at each nesting level. [#95844][#95844] -- Fixed a bug that caused [ranges](https://www.cockroachlabs.com/docs/v22.1/architecture/overview#architecture-range) to remain without a leaseholder in cases of asymmetric [network partitions](https://www.cockroachlabs.com/docs/v22.1/cluster-setup-troubleshooting#network-partition). [#95237][#95237] -- Fixed a bug where [`COPY`](https://www.cockroachlabs.com/docs/v22.1/copy-from) into a column with [collated strings](https://www.cockroachlabs.com/docs/v22.1/collate) would result in an error similar to `internal error: unknown type collatedstring`. 
[#96039][#96039] -- Fixed a bug where spurious [transaction restarts](https://www.cockroachlabs.com/docs/v22.1/common-errors#restart-transaction) could occur when validating a [`FOREIGN KEY`](https://www.cockroachlabs.com/docs/v22.1/foreign-key) in the same transaction where the referenced table is modified. If the transaction was running at [`PRIORITY HIGH`](https://www.cockroachlabs.com/docs/v22.1/transactions#transaction-priorities), deadlocks could occur. [#96124][#96124] - -
- -

Contributors

- -This release includes 36 merged PRs by 22 authors. - -
- -[#94786]: https://github.com/cockroachdb/cockroach/pull/94786 -[#94841]: https://github.com/cockroachdb/cockroach/pull/94841 -[#94926]: https://github.com/cockroachdb/cockroach/pull/94926 -[#95175]: https://github.com/cockroachdb/cockroach/pull/95175 -[#95237]: https://github.com/cockroachdb/cockroach/pull/95237 -[#95295]: https://github.com/cockroachdb/cockroach/pull/95295 -[#95325]: https://github.com/cockroachdb/cockroach/pull/95325 -[#95422]: https://github.com/cockroachdb/cockroach/pull/95422 -[#95440]: https://github.com/cockroachdb/cockroach/pull/95440 -[#95441]: https://github.com/cockroachdb/cockroach/pull/95441 -[#95446]: https://github.com/cockroachdb/cockroach/pull/95446 -[#95505]: https://github.com/cockroachdb/cockroach/pull/95505 -[#95519]: https://github.com/cockroachdb/cockroach/pull/95519 -[#95537]: https://github.com/cockroachdb/cockroach/pull/95537 -[#95585]: https://github.com/cockroachdb/cockroach/pull/95585 -[#95654]: https://github.com/cockroachdb/cockroach/pull/95654 -[#95844]: https://github.com/cockroachdb/cockroach/pull/95844 -[#96017]: https://github.com/cockroachdb/cockroach/pull/96017 -[#96039]: https://github.com/cockroachdb/cockroach/pull/96039 -[#96124]: https://github.com/cockroachdb/cockroach/pull/96124 diff --git a/src/current/_includes/releases/v22.1/v22.1.15.md b/src/current/_includes/releases/v22.1/v22.1.15.md deleted file mode 100644 index d6b7e8182fd..00000000000 --- a/src/current/_includes/releases/v22.1/v22.1.15.md +++ /dev/null @@ -1,31 +0,0 @@ -## v22.1.15 - -Release Date: February 17, 2023 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

SQL language changes

- -- SQL queries running on remote nodes now show up in CPU profiles with `distsql.*` labels. Currently, these include `appname`, `gateway`, `txn`, and `stmt`. [#97055][#97055] - -

Bug fixes

- -- Fixed a bug where a node with a [disk stall](https://www.cockroachlabs.com/docs/v22.1/cluster-setup-troubleshooting#disk-stalls) would continue to accept new connections and preserve existing connections until the disk was no longer stalled. [#96369][#96369] -- Fixed a bug where a [disk stall](https://www.cockroachlabs.com/docs/v22.1/cluster-setup-troubleshooting#disk-stalls) could go undetected under the rare circumstance that several goroutines simultaneously sync the data directory. [#96666][#96666] -- Fixed a race condition where some operations waiting on locks could cause the lockholder [transaction](https://www.cockroachlabs.com/docs/v22.1/transactions) to be aborted if they occurred before the transaction could write its record. [#95215][#95215] -- Fixed a bug where the [**Statement Details** page](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page#statement-details-page) was unable to render. [#97057][#97057] - -
- -

Contributors

- -This release includes 21 merged PRs by 16 authors. - -
- -[#95215]: https://github.com/cockroachdb/cockroach/pull/95215 -[#96296]: https://github.com/cockroachdb/cockroach/pull/96296 -[#96369]: https://github.com/cockroachdb/cockroach/pull/96369 -[#96666]: https://github.com/cockroachdb/cockroach/pull/96666 -[#97055]: https://github.com/cockroachdb/cockroach/pull/97055 -[#97057]: https://github.com/cockroachdb/cockroach/pull/97057 diff --git a/src/current/_includes/releases/v22.1/v22.1.16.md b/src/current/_includes/releases/v22.1/v22.1.16.md deleted file mode 100644 index 44559976add..00000000000 --- a/src/current/_includes/releases/v22.1/v22.1.16.md +++ /dev/null @@ -1,43 +0,0 @@ -## v22.1.16 - -Release Date: March 3, 2023 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

{{ site.data.products.enterprise }} edition changes

- - -

SQL language changes

- -- Added a hard limit to the amount of data that can be flushed to system tables for SQL stats. [#97401][#97401] - -

Operational changes

-
-- A [`BACKUP`](https://www.cockroachlabs.com/docs/v22.1/backup) that encounters too many retryable errors now fails instead of pausing, so that subsequent backups have a chance to succeed. [#96715][#96715]
-
-

Bug fixes

-
-- Fixed a bug in [Enterprise changefeeds](https://www.cockroachlabs.com/docs/v22.1/change-data-capture-overview) where long-running initial scans would fail to generate a checkpoint. Failing to generate a checkpoint is particularly bad if the changefeed restarts for any reason: without checkpoints, the changefeed restarts from the beginning, and in the worst case, when exporting substantially sized tables, the initial scan may have a hard time completing. [#97052][#97052]
-- Fixed a bug where the [`SHOW GRANTS FOR public`](https://www.cockroachlabs.com/docs/v22.1/show-grants) command would return an error saying that the `public` role does not exist. [#96999][#96999]
-- The following spammy log message was removed: `> lease [...] expired before being followed by lease [...]; foreground traffic may have been impacted`. [#97378][#97378]
-- Fixed a bug in the query engine that could cause incorrect results in some cases when a [zigzag join](https://www.cockroachlabs.com/docs/v22.1/cost-based-optimizer#zigzag-joins) was planned. The bug could occur when the two indexes used for the zigzag join had a suffix of matching columns but with different directions. For example, planning a zigzag join with `INDEX(a ASC, b ASC)` and `INDEX(c ASC, b DESC)` could cause incorrect results. This bug has existed since at least [v19.1](https://www.cockroachlabs.com/docs/releases#v19-1). The optimizer will no longer plan a zigzag join in such cases. [#97440][#97440]
-- Added support for disabling cross-descriptor validation on lease renewal. This validation can be problematic when there are many descriptors with many foreign key references, in which case the cross-reference validation could starve schema changes. This behavior is controlled with `sql.catalog.descriptor_lease_renewal_cross_validation`. [#97644][#97644]
-- Columns referenced in partial index predicates and partial unique constraint predicates can no longer be dropped. The [`ALTER TABLE .. DROP COLUMN`](https://www.cockroachlabs.com/docs/v22.1/drop-column) statement now returns an error with a hint suggesting to drop the indexes and constraints first. This is a temporary safeguard to prevent users from hitting [#96924][#96924]. This restriction will be lifted when that bug is fixed. [#97663][#97663]
-
- -

Contributors

- -This release includes 16 merged PRs by 12 authors. - -
- -[#96924]: https://github.com/cockroachdb/cockroach/issues/96924 -[#96715]: https://github.com/cockroachdb/cockroach/pull/96715 -[#96999]: https://github.com/cockroachdb/cockroach/pull/96999 -[#97052]: https://github.com/cockroachdb/cockroach/pull/97052 -[#97378]: https://github.com/cockroachdb/cockroach/pull/97378 -[#97401]: https://github.com/cockroachdb/cockroach/pull/97401 -[#97440]: https://github.com/cockroachdb/cockroach/pull/97440 -[#97644]: https://github.com/cockroachdb/cockroach/pull/97644 -[#97663]: https://github.com/cockroachdb/cockroach/pull/97663 diff --git a/src/current/_includes/releases/v22.1/v22.1.17.md b/src/current/_includes/releases/v22.1/v22.1.17.md deleted file mode 100644 index b0077724364..00000000000 --- a/src/current/_includes/releases/v22.1/v22.1.17.md +++ /dev/null @@ -1,5 +0,0 @@ -## v22.1.17 - -Release Date: March 27, 2023 - -{% include releases/release-downloads-docker-image.md release=include.release %} \ No newline at end of file diff --git a/src/current/_includes/releases/v22.1/v22.1.18.md b/src/current/_includes/releases/v22.1/v22.1.18.md deleted file mode 100644 index 321501435b9..00000000000 --- a/src/current/_includes/releases/v22.1/v22.1.18.md +++ /dev/null @@ -1,90 +0,0 @@ -## v22.1.18 - -Release Date: March 28, 2023 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

Security updates

-
-- Previously, users could gain unauthorized access to [statement diagnostic bundles](https://www.cockroachlabs.com/docs/v22.1/ui-debug-pages#reports) they did not create if they requested the bundle through an HTTP request to `/_admin/v1/stmtbundle/` and correctly guessed its (non-secret) ID. This change locks down the endpoint behind the usual SQL gating, which correctly uses the SQL user in the HTTP session as identified by their cookie. [#99055][#99055]
-- Ensured that no unsanitized URIs or secret keys are written to the [jobs table](https://www.cockroachlabs.com/docs/v22.1/show-jobs) if the [backup](https://www.cockroachlabs.com/docs/v22.1/backup) fails. [#99265][#99265]
-
-

SQL language changes

- -- Increased the default value of [the `sql.stats.cleanup.rows_to_delete_per_txn` cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) to 10k, to increase efficiency of the cleanup job for [SQL statistics](https://www.cockroachlabs.com/docs/v22.1/cost-based-optimizer#table-statistics). [#97722][#97722] -- Added support for the syntax [`CREATE DATABASE IF NOT EXISTS ... WITH OWNER`](https://www.cockroachlabs.com/docs/v22.1/create-database). [#97976][#97976] -- Added a new [session setting](https://www.cockroachlabs.com/docs/v22.1/set-vars#supported-variables), `optimizer_always_use_histograms`, which ensures that the optimizer always uses histograms when available to calculate the [statistics](https://www.cockroachlabs.com/docs/v22.1/cost-based-optimizer#table-statistics) of every plan that it explores. Enabling this setting can prevent the optimizer from choosing a suboptimal [index](https://www.cockroachlabs.com/docs/v22.1/indexes) when statistics for a table are stale. [#98229][#98229] -- The [`SHOW DEFAULT PRIVILEGES`](https://www.cockroachlabs.com/docs/v22.1/show-default-privileges) command now has a column that says if the default privilege will give [the grant option](https://www.cockroachlabs.com/docs/v22.1/security-reference/authorization#supported-privileges) to the grantee. [#98012][#98012] -- Added a new internal virtual table [`crdb_internal.node_memory_monitors`](https://www.cockroachlabs.com/docs/v22.1/crdb-internal). It exposes all of the current reservations with the [memory accounting system](https://www.cockroachlabs.com/docs/v22.1/ui-runtime-dashboard#memory-usage) on a single node. Access to the table requires [`VIEWACTIVITY` or `VIEWACTIVITYREDACTED` permissions](https://www.cockroachlabs.com/docs/v22.1/security-reference/authorization#supported-privileges). [#98043][#98043] -- Fixed the help message on the [`UPDATE`](https://www.cockroachlabs.com/docs/v22.1/update) to correctly position the optional `FROM` clause in the help output. [#99293][#99293] - -
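Two small sketches of the additions above; the database name, owner, and setting value are hypothetical examples:

~~~ sql
-- New syntax: create a database only if it does not exist, with an explicit owner
-- ('movr' and 'max_roach' are hypothetical names).
CREATE DATABASE IF NOT EXISTS movr WITH OWNER max_roach;

-- Opt in to always using histograms during planning for the current session.
SET optimizer_always_use_histograms = true;
~~~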

Command-line changes

- -- The `--drain-wait` argument to the [`cockroach node drain`](https://www.cockroachlabs.com/docs/v22.1/cockroach-node) command will be automatically increased if the command detects that it is smaller than the sum of the [cluster settings](https://www.cockroachlabs.com/docs/v22.1/node-shutdown#cluster-settings) `server.shutdown.drain_wait`, `server.shutdown.connection_wait`, `server.shutdown.query_wait` times two, and `server.shutdown.lease_transfer_wait`. If the `--drain-wait` argument is 0, then no timeout is used. This recommendation [was already documented](https://www.cockroachlabs.com/docs/v22.1/node-shutdown#drain-timeout), but now the advice will be applied automatically. [#98578][#98578] - -

DB Console changes

- -- Add the following new [metrics](https://www.cockroachlabs.com/docs/v22.1/metrics) to track memory usage of prepared statements in [sessions](https://www.cockroachlabs.com/docs/v22.1/show-sessions) [#97741][#97741]: - - `sql.mem.internal.session.prepared.current` - - `sql.mem.internal.session.prepared.max-avg` - - `sql.mem.internal.session.prepared.max-count` - - `sql.mem.internal.session.prepared.max-max` - - `sql.mem.internal.session.prepared.max-p50` - - `sql.mem.internal.session.prepared.max-p75` - - `sql.mem.internal.session.prepared.max-p90` - - `sql.mem.internal.session.prepared.max-p99` - - `sql.mem.internal.session.prepared.max-p99.9` - - `sql.mem.internal.session.prepared.max-p99.99` - - `sql.mem.internal.session.prepared.max-p99.999` - - `sql.mem.sql.session.prepared.current` - - `sql.mem.sql.session.prepared.max-avg` - - `sql.mem.sql.session.prepared.max-count` - - `sql.mem.sql.session.prepared.max-max` - - `sql.mem.sql.session.prepared.max-p50` - - `sql.mem.sql.session.prepared.max-p75` - - `sql.mem.sql.session.prepared.max-p90` - - `sql.mem.sql.session.prepared.max-p99` - - `sql.mem.sql.session.prepared.max-p99.9` - - `sql.mem.sql.session.prepared.max-p99.99` - - `sql.mem.sql.session.prepared.max-p99.999` - -

Bug fixes

-
-- Previously, the [`ALTER TABLE ... INJECT STATISTICS`](https://www.cockroachlabs.com/docs/v22.1/alter-table) command would fail if a column with the [`COLLATED STRING` type](https://www.cockroachlabs.com/docs/v22.1/collate) had histograms to be injected. This is now fixed. The bug had been present since at least [v21.2]({% link releases/v21.2.md %}). [#97492][#97492]
-- Fixed a bug where CockroachDB could encounter an internal error `"no bytes in account to release ..."` in rare cases. The bug was introduced in [v22.1]({% link releases/v22.1.md %}). [#97774][#97774]
-- [Transaction](https://www.cockroachlabs.com/docs/v22.1/transactions) uncertainty intervals are correctly configured for reverse scans again, ensuring that reverse scans cannot serve [stale reads](https://www.cockroachlabs.com/docs/v22.1/architecture/transaction-layer#stale-reads) when clocks in a cluster are skewed. [#97519][#97519]
-- The owner of the [public schema](https://www.cockroachlabs.com/docs/v22.1/schema-design-overview#schemas) can now be changed. Use [`ALTER SCHEMA public OWNER TO new_owner`](https://www.cockroachlabs.com/docs/v22.1/alter-schema). [#98064][#98064]
-- Fixed a bug in which [common table expressions](https://www.cockroachlabs.com/docs/v22.1/common-table-expressions) (CTEs) marked as `WITH RECURSIVE` that were not actually recursive could return incorrect results. This could happen if the CTE used a `UNION ALL`, because the [optimizer](https://www.cockroachlabs.com/docs/v22.1/cost-based-optimizer) incorrectly converted the `UNION ALL` to a `UNION`. This bug had existed since support for recursive CTEs was first added in [v20.1]({% link releases/v20.1.md %}). [#98114][#98114]
-- Fixed a bug present since [v22.1]({% link releases/v22.1.md %}). When [rangefeed](https://www.cockroachlabs.com/docs/v22.1/create-and-configure-changefeeds#enable-rangefeeds) enablement overrides in span configs were introduced, a rangefeed request that reached a span outside the [range](https://www.cockroachlabs.com/docs/v22.1/architecture/glossary#architecture-range) did not cause range cache invalidation, because the enablement settings were checked before determining whether the span was within the range. Requests could repeatedly reach the same incorrect range, causing errors until cache invalidation or node restart. Now CockroachDB correctly checks that the span is within the range prior to checking the enablement settings, invalidating the cache when a request reaches an incorrect range and causing subsequent requests to successfully reach the correct range. [#97660][#97660]
-- Fixed a bug that could crash the CockroachDB process when a query contained a literal [tuple expression](https://www.cockroachlabs.com/docs/v22.1/scalar-expressions#tuple-constructors) with more than two elements and only a single label, e.g., `((1, 2, 3) AS foo)`. [#98314][#98314]
-- Users with the `VIEWACTIVITY`/`VIEWACTIVITYREDACTED` [permissions](https://www.cockroachlabs.com/docs/v22.1/security-reference/authorization#role-options) can now access the [`crdb_internal.ranges_no_leases`](https://www.cockroachlabs.com/docs/v22.1/crdb-internal) table, which is necessary to view important DB Console pages (specifically, the [Databases Page](https://www.cockroachlabs.com/docs/v22.1/ui-databases-page), including database details and database tables). [#98646][#98646]
-- Fixed a bug where using [`ST_Transform`](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#st_transform) could result in a memory leak.
[#98835][#98835] -- Fixed a bug that caused incorrect results when comparisons of [tuples](https://www.cockroachlabs.com/docs/v22.1/scalar-expressions#tuple-constructors) were done using the `ANY` [operator](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#operators). For example, an expression like (x, y) = ANY (SELECT a, b FROM t WHERE ...) could return `true` instead of the correct result of `NULL` when `x` and `y` were `NULL`, or `a` and `b` were `NULL`. This could only occur if the [subquery is correlated](https://www.cockroachlabs.com/docs/v22.1/subqueries.html#correlated-subqueries), i.e., it references columns from the outer part of the query. This bug was present since the [cost-based optimizer](https://www.cockroachlabs.com/docs/v22.1/cost-based-optimizer) was introduced in [v2.1]({% link releases/v2.1.md %}). [#99161][#99161] - -

Contributors

- -This release includes 40 merged PRs by 27 authors. - - - -[#97492]: https://github.com/cockroachdb/cockroach/pull/97492 -[#97519]: https://github.com/cockroachdb/cockroach/pull/97519 -[#97660]: https://github.com/cockroachdb/cockroach/pull/97660 -[#97722]: https://github.com/cockroachdb/cockroach/pull/97722 -[#97741]: https://github.com/cockroachdb/cockroach/pull/97741 -[#97774]: https://github.com/cockroachdb/cockroach/pull/97774 -[#97976]: https://github.com/cockroachdb/cockroach/pull/97976 -[#98012]: https://github.com/cockroachdb/cockroach/pull/98012 -[#98043]: https://github.com/cockroachdb/cockroach/pull/98043 -[#98064]: https://github.com/cockroachdb/cockroach/pull/98064 -[#98114]: https://github.com/cockroachdb/cockroach/pull/98114 -[#98229]: https://github.com/cockroachdb/cockroach/pull/98229 -[#98314]: https://github.com/cockroachdb/cockroach/pull/98314 -[#98392]: https://github.com/cockroachdb/cockroach/pull/98392 -[#98578]: https://github.com/cockroachdb/cockroach/pull/98578 -[#98646]: https://github.com/cockroachdb/cockroach/pull/98646 -[#98835]: https://github.com/cockroachdb/cockroach/pull/98835 -[#99055]: https://github.com/cockroachdb/cockroach/pull/99055 -[#99161]: https://github.com/cockroachdb/cockroach/pull/99161 -[#99265]: https://github.com/cockroachdb/cockroach/pull/99265 -[#99293]: https://github.com/cockroachdb/cockroach/pull/99293 diff --git a/src/current/_includes/releases/v22.1/v22.1.19.md b/src/current/_includes/releases/v22.1/v22.1.19.md deleted file mode 100644 index 8eca4dfbf1d..00000000000 --- a/src/current/_includes/releases/v22.1/v22.1.19.md +++ /dev/null @@ -1,62 +0,0 @@ -## v22.1.19 - -Release Date: April 25, 2023 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

General changes

- -- Queries with invalid syntax are now logged at the `INFO` level on the [`SQL_EXEC` logging channel](https://www.cockroachlabs.com/docs/v22.1/sql-audit-logging). Previously, they were logged at the `ERROR` level. [#101090][#101090] - -

SQL language changes

- -- Added the `prepared_statements_cache_size` [session setting](https://www.cockroachlabs.com/docs/v22.1/set-vars) that helps to prevent [prepared statement](https://www.cockroachlabs.com/docs/v22.1/savepoint#savepoints-and-prepared-statements) leaks by automatically deallocating the least-recently-used prepared statements when the cache reaches a given size. [#99264][#99264] - -
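A rough sketch of capping the prepared-statement cache for a session; the value and its byte-size string format are assumptions, not a documented recommendation:

~~~ sql
-- Deallocate least-recently-used prepared statements once the cache exceeds roughly 1 MiB
-- (the value and the byte-size string format are illustrative assumptions).
SET prepared_statements_cache_size = '1MiB';
~~~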

DB Console changes

- -- New data is now auto-fetched every 5 minutes on the [**Statement and Transaction Fingerprints**](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page#statement-fingerprint-page) pages. [#100702][#100702] - -

Bug fixes

- -- Previously, [`ADD COLUMN ... DEFAULT cluster_logical_timestamp()`](https://www.cockroachlabs.com/docs/v22.1/alter-table) would crash the node and leave the table in a corrupt state. The root cause is a `nil` pointer dereference. The bug is now fixed by returning an unimplemented error and hence disallowing using the [builtin function](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#array-functions) as the default value when backfilling. [#99682][#99682] -- Fixed a bug where the stats columns on the [**Transaction Fingerprint Overview**](https://www.cockroachlabs.com/docs/v22.1/ui-transactions-page#transaction-details-page) page was continuously incrementing. [#99405][#99405] -- Fixed a bug that prevented the [garbage collection](https://www.cockroachlabs.com/docs/v22.1/architecture/storage-layer#garbage-collection) job for the [`TRUNCATE TABLE`](https://www.cockroachlabs.com/docs/v22.1/truncate) command from successfully finishing if the table descriptor had already been garbage collected. The garbage collection job now succeeds in this situation by handling the missing descriptor edge case. [#100146][#100146] -- Fixed a bug present in v21.1 that would cause the SQL gateway node to crash if a [view was created](https://www.cockroachlabs.com/docs/v22.1/create-view) with circular or self-referencing dependencies. [#100165][#100165] -- Fixed a bug in evaluation of `ANY`, `SOME`, and `ALL` [sub-operators](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#operators) that would cause expressions like `NULL = ANY(ARRAY[]::INT[])` to return `NULL` instead of `FALSE`. [#100363][#100363] -- Fixed a bug that could prevent a cached query with a [user-defined type](https://www.cockroachlabs.com/docs/v22.1/create-type) reference from being invalidated even after a [schema change](https://www.cockroachlabs.com/docs/v22.1/online-schema-changes) that should prevent the type from being resolved. [#100358][#100358] -- Fixed a bug existing before v22.1 that could cause a projected expression to replace column references with incorrect values. [#100368][#100368] -- Fixed a bug where the physical disk space of some tables could not be calculated. [#100937][#100937] -- Fixed a bug so that the [`crdb_internal.deserialize_session`](https://www.cockroachlabs.com/docs/v22.1/crdb-internal) function works properly with prepared statements that have more parameter type hints than parameters. [#101363][#101363] -- Fixed a bug where in [PostgreSQL Extended Query protocol](https://www.postgresql.org/docs/10/protocol-flow.html#PROTOCOL-FLOW-EXT-QUERY) mode it was possible for auto-commits to not execute certain logic for DDL, when certain DML ([`INSERT`](https://www.cockroachlabs.com/docs/v22.1/insert)/[`UPDATE`](https://www.cockroachlabs.com/docs/v22.1/update)/[`DELETE`](https://www.cockroachlabs.com/docs/v22.1/delete)) and DDL were combined in an implicit transaction. [#101630][#101630] -- In the [**DB Console SQL Activity**](https://www.cockroachlabs.com/docs/v22.1/ui-overview#sql-activity) pages, issuing a new request for stats while one is pending is now allowed and will replace the pending request. [#100702][#100702] -- Fixed a rare condition that could allow a [transaction](https://www.cockroachlabs.com/docs/v22.1/transactions) to get stuck indefinitely waiting on a released row-level [lock](https://www.cockroachlabs.com/docs/v22.1/architecture/transaction-layer#concurrency-control) if the per-range lock count limit was exceeded while the transaction was waiting on another lock. 
[#100944][#100944] -- Fixed a bug where CockroachDB incorrectly evaluated [`EXPORT`](https://www.cockroachlabs.com/docs/v22.1/export) statements that had a projection or rendering on top of the `EXPORT`. (For example, `WITH CTE AS (EXPORT INTO CSV 'nodelocal://1/export1/' FROM SELECT * FROM t) SELECT filename FROM CTE;` would not work.) Previously, such statements would result in panics or incorrect query results. [#101808][#101808] - -

Performance improvements

- -- Removed prettify usages that could cause out-of-memory (OOM) errors on the [**Statement Details**](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page) and [**Transaction Details**](https://www.cockroachlabs.com/docs/v22.1/ui-transactions-page) page. [#99452][#99452] - -
- -

Contributors

- -This release includes 38 merged PRs by 23 authors. - -
- -[#100146]: https://github.com/cockroachdb/cockroach/pull/100146 -[#100165]: https://github.com/cockroachdb/cockroach/pull/100165 -[#100358]: https://github.com/cockroachdb/cockroach/pull/100358 -[#100363]: https://github.com/cockroachdb/cockroach/pull/100363 -[#100368]: https://github.com/cockroachdb/cockroach/pull/100368 -[#100702]: https://github.com/cockroachdb/cockroach/pull/100702 -[#100937]: https://github.com/cockroachdb/cockroach/pull/100937 -[#100944]: https://github.com/cockroachdb/cockroach/pull/100944 -[#101090]: https://github.com/cockroachdb/cockroach/pull/101090 -[#101363]: https://github.com/cockroachdb/cockroach/pull/101363 -[#101630]: https://github.com/cockroachdb/cockroach/pull/101630 -[#101808]: https://github.com/cockroachdb/cockroach/pull/101808 -[#99264]: https://github.com/cockroachdb/cockroach/pull/99264 -[#99405]: https://github.com/cockroachdb/cockroach/pull/99405 -[#99452]: https://github.com/cockroachdb/cockroach/pull/99452 -[#99682]: https://github.com/cockroachdb/cockroach/pull/99682 diff --git a/src/current/_includes/releases/v22.1/v22.1.2.md b/src/current/_includes/releases/v22.1/v22.1.2.md deleted file mode 100644 index 6950367080b..00000000000 --- a/src/current/_includes/releases/v22.1/v22.1.2.md +++ /dev/null @@ -1,66 +0,0 @@ -## v22.1.2 - -Release Date: June 22, 2022 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

Enterprise edition changes

- -- CSV is now a supported format for changefeeds. This only works with `initial_scan='only'` and does not work with diff/resolved options. [#82355][#82355] - -
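A minimal sketch of a CSV changefeed per the note above; the table and sink URI are placeholders:

~~~ sql
-- Export the table once as CSV via a changefeed ('movr.users' and the sink URI are placeholders).
CREATE CHANGEFEED FOR TABLE movr.users
  INTO 'gs://bucket/path?AUTH=implicit'
  WITH format = csv, initial_scan = 'only';
~~~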

SQL language changes

- -- The `bulkio.ingest.sender_concurrency_limit` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) can be used to adjust the concurrency at which any one SQL node, across all operations that it is running (e.g., [`RESTORE`s](https://www.cockroachlabs.com/docs/v22.1/restore), [`IMPORT`s](https://www.cockroachlabs.com/docs/v22.1/import), and [schema changes](https://www.cockroachlabs.com/docs/v22.1/online-schema-changes)), will send bulk ingest requests to the KV storage layer. [#81789][#81789] -- The `sql.ttl.range_batch_size` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) is deprecated. [#82711][#82711] -- The pgwire `DESCRIBE` command is now supported for use against a cursor created with the `DECLARE` command. This improves compatibility with PostgreSQL and is needed for compatibility with psycopg3 server-side cursors. [#82772][#82772] -- Fixed an issue where `SHOW BACKUP with privileges` displayed grant statements with incorrect syntax (specifically, without the object type). As an example, previously displayed: `GRANT ALL ON status TO j4;` Now displayed: `GRANT ALL ON TYPE status TO j4;` [#82727][#82727] -- Added the `spanconfig.kvsubscriber.update_behind_nanos` metric to track the latency between realtime and the last update handled by the `KVSubscriber`. This metric can be used to monitor the staleness of a node's view of reconciled `spanconfig` state. [#82895][#82895] - -
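A rough sketch of tuning the bulk-ingest concurrency setting mentioned above; the value is illustrative, not a recommendation:

~~~ sql
-- Limit the per-node concurrency of bulk ingest requests sent to the KV layer (illustrative value).
SET CLUSTER SETTING bulkio.ingest.sender_concurrency_limit = 32;
~~~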

DB Console changes

- -- The time picker component has been improved such that users can use keyboard input to select a time without having to type `" (UTC)"`. [#82495][#82495] -- The time picker now opens directly to the custom time selection menu when a custom time is already selected. A "Preset Time Ranges" navigation link has been added to go to the preset options from the custom menu. [#82495][#82495] - -

Bug fixes

- -- The output of [`SHOW CREATE VIEW`](https://www.cockroachlabs.com/docs/v22.1/show-create#show-the-create-view-statement-for-a-view) now properly includes the keyword `MATERIALIZED` for materialized views. [#82196][#82196] -- Fixed the `identity_generation` column in the [`information_schema.columns`](https://www.cockroachlabs.com/docs/v22.1/information-schema#columns) table so its value is either `BY DEFAULT`, `ALWAYS`, or `NULL`. [#82184][#82184] -- Disk write probes during node liveness heartbeats will no longer get stuck on stalled disks, instead returning an error once the operation times out. Additionally, disk probes now run in parallel on nodes with multiple stores. [#81476][#81476] -- Fixed a bug where an unresponsive node (e.g., a node with a stalled disk) could prevent other nodes from acquiring its leases, effectively stalling these ranges until the node was shut down or recovered. [#81815][#81815] -- Fixed a crash that could happen when preparing a statement with unknown placeholder types. [#82647][#82647] -- Previously, when adding a column to a pre-existing table and adding a partial index referencing that column in the transaction, DML operations against the table while the schema change was ongoing would fail. Now these hazardous schema changes are not allowed. [#82668][#82668] -- Fixed a bug where CockroachDB would sometimes automatically retry the `BEGIN` statement of an explicit transaction. [#82681][#82681] -- Fixed a bug where draining/drained nodes could re-acquire leases during an import or an index backfill. [#80834][#80834] -- Fixed a bug where the startup of an internal component after a server restart could result in the delayed application of zone configuration. [#82858][#82858] -- Previously, using [`AS OF SYSTEM TIME`](https://www.cockroachlabs.com/docs/v22.1/as-of-system-time) of two different statements in the same line would result in an assertion error. This is now a PG error with code `0A000`. [#82654][#82654] -- Fixed a bug where KV requests, in particular export requests issued during a [backup](https://www.cockroachlabs.com/docs/v22.1/backup), were rejected incorrectly causing the backup to fail with a `batch timestamp must be after replica GC threshold` error. The requests were rejected on the pretext that their timestamp was below the [garbage collection threshold](https://www.cockroachlabs.com/docs/v22.1/architecture/storage-layer#garbage-collection) of the key span. This was because the [protected timestamps](https://www.cockroachlabs.com/docs/v22.1/architecture/storage-layer#protected-timestamps) were not considered when computing the garbage collection threshold for the key span being backed up. Protected timestamp records hold up the garbage collection threshold of a span during long-running operations such as backups to prevent revisions from being garbage collected. [#82757][#82757] - -
- -

-<h3>Contributors</h3>
- -This release includes 54 merged PRs by 31 authors. -We would like to thank the following contributors from the CockroachDB community: - -- likzn (first-time contributor) - -
- -[#80834]: https://github.com/cockroachdb/cockroach/pull/80834 -[#81476]: https://github.com/cockroachdb/cockroach/pull/81476 -[#81789]: https://github.com/cockroachdb/cockroach/pull/81789 -[#81815]: https://github.com/cockroachdb/cockroach/pull/81815 -[#82184]: https://github.com/cockroachdb/cockroach/pull/82184 -[#82196]: https://github.com/cockroachdb/cockroach/pull/82196 -[#82355]: https://github.com/cockroachdb/cockroach/pull/82355 -[#82495]: https://github.com/cockroachdb/cockroach/pull/82495 -[#82647]: https://github.com/cockroachdb/cockroach/pull/82647 -[#82654]: https://github.com/cockroachdb/cockroach/pull/82654 -[#82668]: https://github.com/cockroachdb/cockroach/pull/82668 -[#82681]: https://github.com/cockroachdb/cockroach/pull/82681 -[#82711]: https://github.com/cockroachdb/cockroach/pull/82711 -[#82727]: https://github.com/cockroachdb/cockroach/pull/82727 -[#82772]: https://github.com/cockroachdb/cockroach/pull/82772 -[#82858]: https://github.com/cockroachdb/cockroach/pull/82858 -[#82895]: https://github.com/cockroachdb/cockroach/pull/82895 -[#82757]: https://github.com/cockroachdb/cockroach/pull/82757 diff --git a/src/current/_includes/releases/v22.1/v22.1.20.md b/src/current/_includes/releases/v22.1/v22.1.20.md deleted file mode 100644 index b39de5daf91..00000000000 --- a/src/current/_includes/releases/v22.1/v22.1.20.md +++ /dev/null @@ -1,34 +0,0 @@ -## v22.1.20 - -Release Date: May 12, 2023 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

-<h3 id="v22-1-20-bug-fixes">Bug fixes</h3>
- -- Fixed a rare bug where [replica rebalancing](https://www.cockroachlabs.com/docs/v22.1/architecture/replication-layer) during write heavy workloads could cause keys to be deleted unexpectedly from a [local store](https://www.cockroachlabs.com/docs/v22.1/cockroach-start#flags-store). [#102190][#102190] -- Fixed a bug introduced in v22.1.19, v22.2.8, and pre-release versions of 23.1 that could cause queries to return spurious insufficient [privilege](https://www.cockroachlabs.com/docs/v22.1/security-reference/authorization#privileges) errors. For the bug to occur, two databases would need to have duplicate tables, each with a [foreign key](https://www.cockroachlabs.com/docs/v22.1/foreign-key) reference to another table. The error would then occur if the same SQL string was executed against both databases concurrently by users that have privileges over only one of the tables. [#102653][#102653] -- Fixed a bug where a [backup](https://www.cockroachlabs.com/docs/v22.1/backup-and-restore-overview) with a key's [revision history](https://www.cockroachlabs.com/docs/v22.1/take-backups-with-revision-history-and-restore-from-a-point-in-time) split across multiple [SST files](https://www.cockroachlabs.com/docs/v22.1/architecture/storage-layer#ssts) may not have correctly restored the proper revision of the key. [#102372][#102372] -- Fixed a bug present since v21.1 that allowed values to be inserted into an [`ARRAY`](https://www.cockroachlabs.com/docs/v22.1/array)-type column that did not conform to the inner-type of the array. For example, it was possible to insert `ARRAY['foo']` into a column of type `CHAR(1)[]`. This could cause incorrect results when querying the table. The [`INSERT`](https://www.cockroachlabs.com/docs/v22.1/insert) now errors, which is expected. [#102811][#102811] -- Fixed a bug where [backup and restore](https://www.cockroachlabs.com/docs/v22.1/backup-and-restore-overview) would panic if the target is a synthetic public [schema](https://www.cockroachlabs.com/docs/v22.1/schema-design-overview), such as `system.public`. [#102783][#102783] -- Fixed an issue since v20.2.0 where running [`SHOW HISTOGRAM`](https://www.cockroachlabs.com/docs/v22.1/show-columns) to see the histogram for an [`ENUM`](https://www.cockroachlabs.com/docs/v22.1/enum)-type column would panic and crash the cockroach process. [#102829][#102829] - -
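A sketch of the `ARRAY` element-type fix described above; the table name and values are assumptions for illustration.

```sql
CREATE TABLE codes (tags CHAR(1)[]);
-- Previously accepted even though 'foo' does not fit CHAR(1); now returns an error.
INSERT INTO codes VALUES (ARRAY['foo']);
-- Conforming values are still accepted.
INSERT INTO codes VALUES (ARRAY['f']);
```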

-<h3 id="v22-1-20-sql-language-changes">SQL language changes</h3>
- -- Added two views to the [`crdb_internal`](https://www.cockroachlabs.com/docs/v22.1/crdb-internal) catalog: `crdb_internal.statement_statistics_persisted`, which surfaces data in the persisted `system.statement_statistics` table, and `crdb_internal.transaction_statistics_persisted`, which surfaces the `system.transaction_statistics` table. [#99272][#99272] - -
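The new views can be queried directly. `SELECT *` is used here to avoid assuming a specific column layout.

```sql
-- Persisted per-statement and per-transaction statistics.
SELECT * FROM crdb_internal.statement_statistics_persisted LIMIT 5;
SELECT * FROM crdb_internal.transaction_statistics_persisted LIMIT 5;
```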
- -

-<h3 id="v22-1-20-contributors">Contributors</h3>
- -This release includes 13 merged PRs by 14 authors. - -
- -[#102190]: https://github.com/cockroachdb/cockroach/pull/102190 -[#102372]: https://github.com/cockroachdb/cockroach/pull/102372 -[#102653]: https://github.com/cockroachdb/cockroach/pull/102653 -[#102783]: https://github.com/cockroachdb/cockroach/pull/102783 -[#102811]: https://github.com/cockroachdb/cockroach/pull/102811 -[#102829]: https://github.com/cockroachdb/cockroach/pull/102829 -[#99272]: https://github.com/cockroachdb/cockroach/pull/99272 diff --git a/src/current/_includes/releases/v22.1/v22.1.21.md b/src/current/_includes/releases/v22.1/v22.1.21.md deleted file mode 100644 index 5cdfd27c1c2..00000000000 --- a/src/current/_includes/releases/v22.1/v22.1.21.md +++ /dev/null @@ -1,30 +0,0 @@ -## v22.1.21 - -Release Date: June 5, 2023 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

-<h3 id="v22-1-21-db-console-changes">DB Console changes</h3>
- -- If a page crashes, a force refresh is no longer required to be able to see the other pages on [DB Console](https://www.cockroachlabs.com/docs/v22.1/ui-overview). [#103326][#103326] -- On the [SQL Activity > Fingerprints](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page#sql-statement-fingerprints) view, users will not see stats that have not yet been flushed to disk. [#103130][#103130] -- Users can now request top-k stmts by % runtime on the [SQL Activity > Fingerprints](https://www.cockroachlabs.com/docs/v22.1/ui-overview#sql-activity) view. [#103130][#103130] -- Added Search Criteria to the [Statements](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page) and [Transactions](https://www.cockroachlabs.com/docs/v22.1/ui-transactions-page) pages. [#103130][#103130] - -

-<h3 id="v22-1-21-bug-fixes">Bug fixes</h3>
- -- Fixed a bug in [closed timestamp](https://www.cockroachlabs.com/docs/v22.1/architecture/transaction-layer#closed-timestamps) updates within its side-transport. Previously, during [asymmetric network partitions](https://www.cockroachlabs.com/docs/v22.1/cluster-setup-troubleshooting#network-partition), a node that transfers a lease away, and misses a [liveness heartbeat](https://www.cockroachlabs.com/docs/v22.1/architecture/replication-layer#epoch-based-leases-table-data), could then erroneously update the closed timestamp during the stasis period of its liveness. This could lead to closed timestamp invariant violation, and node crashes. In extreme cases this could lead to inconsistencies in read-only queries. [#102597][#102597] -- The value of `pg_constraint.conparentid` is now `0` rather than `NULL`. CockroachDB does not support constraints on [partitions](https://www.cockroachlabs.com/docs/v22.1/partitioning). [#103230][#103230] - -
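The `pg_constraint.conparentid` change above can be observed with a query like the following:

```sql
-- conparentid is now reported as 0 rather than NULL for every constraint.
SELECT conname, conparentid FROM pg_catalog.pg_constraint LIMIT 5;
```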
- -

-<h3 id="v22-1-21-contributors">Contributors</h3>
- -This release includes 7 merged PRs by 6 authors. - -
- -[#102597]: https://github.com/cockroachdb/cockroach/pull/102597 -[#103130]: https://github.com/cockroachdb/cockroach/pull/103130 -[#103230]: https://github.com/cockroachdb/cockroach/pull/103230 -[#103326]: https://github.com/cockroachdb/cockroach/pull/103326 diff --git a/src/current/_includes/releases/v22.1/v22.1.22.md b/src/current/_includes/releases/v22.1/v22.1.22.md deleted file mode 100644 index cf40a92e92f..00000000000 --- a/src/current/_includes/releases/v22.1/v22.1.22.md +++ /dev/null @@ -1,32 +0,0 @@ -## v22.1.22 - -Release Date: August 14, 2023 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

-<h3 id="v22-1-22-general-changes">General changes</h3>
-
-- Improved the error message that is logged when a changefeed error is triggered by dropping the parent database or a type. The previous error was of the format `value type is not BYTES: UNKNOWN`. [#107943][#107943]
-

-<h3 id="v22-1-22-sql-language-changes">SQL language changes</h3>
- -- When no data is persisted to SQL statistics tables, such as when no flush has occurred or when flushing is disabled, the endpoint's combined view is shown, which includes in-memory data. [#104052][#104052] - -

-<h3 id="v22-1-22-bug-fixes">Bug fixes</h3>
-
-- Fixed a bug where `SHOW DEFAULT PRIVILEGES` returned no information for a database with uppercase or special characters in its name (see the example after this list). [#103954][#103954]
-- Fixed a bug that would result in corruption of [encrypted data at rest on a cluster node]({% link v23.1/encryption.md %}). If a node with this corrupted state was restarted, the node could fail to rejoin the cluster. If multiple nodes encountered this bug at the same time during a rollout, the cluster could lose [quorum]({% link v23.1/architecture/replication-layer.md %}#overview). For more information, refer to [Technical Advisory 106617](https://www.cockroachlabs.com/docs/advisories/a106617). [#104141][#104141]
-- Fixed a null-pointer exception introduced in v22.2.9 and v23.1.1 that could cause a node to crash when populating SQL Activity pages in the DB Console. [#104052][#104052]
-
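A minimal sketch of the `SHOW DEFAULT PRIVILEGES` fix referenced above; the database name is made up.

```sql
CREATE DATABASE "MyApp-DB";
USE "MyApp-DB";
-- Previously returned no rows for a name that needs quoting; now reports the defaults.
SHOW DEFAULT PRIVILEGES;
```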
- -

-<h3 id="v22-1-22-contributors">Contributors</h3>
- -This release includes 7 merged PRs by 9 authors. - -
- -[#103954]: https://github.com/cockroachdb/cockroach/pull/103954 -[#104052]: https://github.com/cockroachdb/cockroach/pull/104052 -[#104141]: https://github.com/cockroachdb/cockroach/pull/104141 -[#107943]: https://github.com/cockroachdb/cockroach/pull/107943 diff --git a/src/current/_includes/releases/v22.1/v22.1.3.md b/src/current/_includes/releases/v22.1/v22.1.3.md deleted file mode 100644 index f929bfb02f3..00000000000 --- a/src/current/_includes/releases/v22.1/v22.1.3.md +++ /dev/null @@ -1,137 +0,0 @@ -## v22.1.3 - -Release Date: July 11, 2022 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

-<h3 id="v22-1-3-enterprise-edition-changes">Enterprise edition changes</h3>
-
-- Added the ability to provide short-lived OAuth 2.0 tokens as credentials to Google Cloud Storage and KMS. The token can be passed to the GCS or KMS URI via the new `BEARER_TOKEN` parameter for the "specified" authentication mode (see the example after this list).
-
-    - Example GCS URI: `gs:///?AUTH=specified&BEARER_TOKEN=`
-    - Example KMS URI: `gs:///?AUTH=specified&BEARER_TOKEN=`
-
-    There is no refresh mechanism associated with this token, so it is up to the user to ensure that its TTL is longer than the duration of the job or query that is using the token. The job or query may irrecoverably fail if one of its tokens expires before completion. [#83210][#83210]
-
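A hedged example of the new parameter in a `BACKUP` statement. The bucket path and token value below are placeholders, not working credentials.

```sql
-- Back up to Google Cloud Storage using a short-lived OAuth 2.0 token.
BACKUP DATABASE defaultdb
  INTO 'gs://example-bucket/backups?AUTH=specified&BEARER_TOKEN=ya29.EXAMPLE'
  AS OF SYSTEM TIME '-10s';
```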

-<h3 id="v22-1-3-sql-language-changes">SQL language changes</h3>
- -- CockroachDB now sends the `Severity_Nonlocalized` field in the `pgwire` Notice Response. [#82939][#82939] -- Updated the `pg_backend_pid()` [built-in function](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#built-in-functions) to match the data in the query cancellation key created during session initialization. This function is just for compatibility, and it does not return a real process ID. [#83167][#83167] -- The log fields for captured index usage statistics are no longer redacted [#83293][#83293] -- CockroachDB now returns a message instructing users to run hash-sharded index creation statements from a pre-v22.1 node, or just wait until the upgrade is finalized, when the cluster is in a mixed state during a rolling upgrade. Previously, we simply threw a descriptor validation error. [#83556][#83556] -- The [sampled query telemetry log](https://www.cockroachlabs.com/docs/v22.1/logging-overview#logging-destinations) now includes a plan gist field. The plan gist field provides a compact representation of a logical plan for the sampled query. The field is written as a base64-encoded string. [#83643][#83643] -- The error code reported when trying to use a system or [virtual column](https://www.cockroachlabs.com/docs/v22.1/computed-columns) in the `STORING` clause of an `INDEX` has been changed from `XXUUU (internal error)` to `0A000 (feature not supported)`. [#83648][#83648] -- [Foreign keys](https://www.cockroachlabs.com/docs/v22.1/foreign-key) can now reference the `crdb_region` column in [`REGIONAL BY ROW` tables](https://www.cockroachlabs.com/docs/v22.1/multiregion-overview#regional-tables) even if `crdb_region` is not explicitly part of a `UNIQUE` constraint. This is possible since `crdb_region` is implicitly included in every index on `REGIONAL BY ROW` tables as the partitioning key. This applies to whichever column is used as the partitioning column, in case a different name is used with a `REGIONAL BY ROW AS...` statement. [#83815][#83815] - -
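The error-code change for `STORING` a system or virtual column can be reproduced with a sketch like this; the table name is an assumption for the example.

```sql
CREATE TABLE t2 (a INT PRIMARY KEY, b INT, c INT AS (a + b) VIRTUAL);
-- Now fails with SQLSTATE 0A000 (feature not supported) rather than an internal error.
CREATE INDEX ON t2 (b) STORING (c);
```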

-<h3 id="v22-1-3-operational-changes">Operational changes</h3>
- -- Disk stalls no longer prevent the CockroachDB process from crashing when `Fatal` errors are emitted. [#83127][#83127] -- Added a new [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) `bulkio.backup.checkpoint_interval` which controls the minimum interval between writes of progress checkpoints to external storage. [#83266][#83266] -- The application name associated with a SQL session is no longer considered redactable information. [#83553][#83553] - -

-<h3 id="v22-1-3-command-line-changes">Command-line changes</h3>
- -- The `cockroach demo` command now enables [rangefeeds](https://www.cockroachlabs.com/docs/v22.1/create-and-configure-changefeeds#enable-rangefeeds) by default. You can restore the previous behavior by starting the command with the `--auto-enable-rangefeeds=false` flag. [#83344][#83344] - -

-<h3 id="v22-1-3-db-console-changes">DB Console changes</h3>
- -- The DB Console has a more helpful error message when the [**Jobs** page](https://www.cockroachlabs.com/docs/v22.1/ui-jobs-page) times out, and an information message appears after 2 seconds of loading and indicates that the loading might take a while. Previously, it would show the message `Promise timed out after 30000 ms`. [#82722][#82722] -- The **Statement Details** page was renamed to **Statement Fingerprint**. The [**Statement Fingerprint**](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page#statement-fingerprint-page) page now shows charts for: Execution and Planning Time, Rows Processed, Execution Retries, Execution Count, and Contention. [#82960][#82960] -- The time interval component on the [**Statements**](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page) and [**Transactions**](https://www.cockroachlabs.com/docs/v22.1/ui-transactions-page) pages has been added to the **Statement Fingerprint** **Overview** and [**Explain Plans**](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page#explain-plans) tabs, and the [**Transaction Details**](https://www.cockroachlabs.com/docs/v22.1/ui-transactions-page#transaction-details-page) page. [#82721][#82721] -- Added a confirmation modal to the `reset SQL Stats` button. [#83142][#83142] -- Application names and database names are now sorted in the dropdown menus. [#83334][#83334] -- A new single column called **Rows Processed**, displayed by default, combines the columns rows read and rows written on the **Statements** and **Transactions** pages. [#83511][#83511] -- The time interval selected on the [**Metrics**](https://www.cockroachlabs.com/docs/v22.1/ui-overview#metrics) page and the [**SQL Activity**](https://www.cockroachlabs.com/docs/v22.1/ui-overview#sql-activity) pages are now aligned. If the user changes the time interval on one page, the value will be the same for the other. [#83507][#83507] -- Added a label to the **Statement**, **Statement Fingerprint**, and **Transaction** pages, with information about the time interval for which we're showing information. The **Execution Stats** tab was removed from the **Statement Fingerprint** page. [#83333][#83333] -- Removed the 10 and 30 minute options on the **SQL Activity** page. [#83542][#83542] -- On the **Statements** page, users can no longer filter statements by searching for text in the `EXPLAIN` plan. [#83652][#83652] -- Updated the tooltips on the **Statements** and **Transactions** pages in the DB Console for improved user experience. [#83540][#83540] - -

-<h3 id="v22-1-3-bug-fixes">Bug fixes</h3>
- -- Fixed a bug where, in earlier v22.1 releases, added validation could cause problems for descriptors which carried invalid back references due to a previous bug in v21.1. This stricter validation could result in a variety of query failures. CockroachDB now weakens the validation to permit the corruption. A subsequent fix in v22.2 is scheduled that will repair the invalid reference. [#82859][#82859] -- Added missing support for preparing a `DECLARE` cursor statement with placeholders. [#83001][#83001] -- CockroachDB now treats node unavailable errors as retry-able [changefeed](https://www.cockroachlabs.com/docs/v22.1/change-data-capture-overview) errors. [#82874][#82874] -- CockroachDB now ensures running changefeeds do not inhibit node shutdown. [#82874][#82874] -- **Last Execution** time now shows the correct value on **Statement Fingerprint** page. [#83114][#83114] -- CockroachDB now uses the proper multiplying factor to contention value on **Statement Details** page. [#82960][#82960] -- CockroachDB now prevents disabling [TTL](https://www.cockroachlabs.com/docs/v22.1/row-level-ttl) with `ttl = 'off'` to avoid conflicting with other TTL settings. To disable TTL, use `RESET (ttl)`. [#83216][#83216] -- Fixed a panic that could occur if the `inject_retry_errors_enabled` cluster setting is true and an `INSERT` is executed outside of an explicit transaction. [#83193][#83193] -- Previously, a user could be connected to a database but unable to see the metadata for that database in [`pg_catalog`](https://www.cockroachlabs.com/docs/v22.1/pg-catalog) if the user did not have privileges for the database. Now, users can always see the `pg_catalog` metadata for a database they are connected to (see [#59875](https://github.com/cockroachdb/cockroach/issues/59875)). [#83360][#83360] -- The **Statement Fingerprint** page now finds the stats when the `unset` application filter is selected. [#83334][#83334] -- Fixed a bug where no validation was performed when adding a [virtual computed column](https://www.cockroachlabs.com/docs/v22.1/computed-columns) which was marked `NOT NULL`. This meant that it was possible to have a virtual computed column with an active `NOT NULL` constraint despite having rows in the table for which the column was `NULL`. [#83353][#83353] -- Fixed the behavior of the [`soundex` function](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#string-and-byte-functions) when passed certain Unicode inputs. Previously, certain Unicode inputs could result in crashes, errors, or incorrect output. [#83435][#83435] -- Fixed a bug where a lock could be held for a long period of time when adding a new column to a table (or altering a column type). This contention could make the [**Jobs** page](https://www.cockroachlabs.com/docs/v22.1/ui-jobs-page) non-responsive and job adoption slow. [#83306][#83306] -- Fixed a bug where a panic could occur during server startup when restarting a node which is running a garbage collection job. [#83474][#83474] -- The period selected on the **Metrics** page time picker is preserved when refreshing the page, and no longer changes to a custom period. [#83507][#83507] -- Changefeeds no longer error out when attempting to checkpoint during intermediate pause-requested or cancel-requested states. [#83569][#83569] -- CockroachDB now retries S3 operations when they error out with a read connection reset error instead of failing the top-level job. 
[#83581][#83581] -- The **Statements** table for a transaction in the **Transaction Details** page now shows the correct number of statements for a transaction. [#83651][#83651] -- Fixed a bug that prevented [partial indexes](https://www.cockroachlabs.com/docs/v22.1/partial-indexes) from being used in some query plans. For example, a partial index with a predicate `WHERE a IS NOT NULL` was not previously used if `a` was a `NOT NULL` column. [#83241][#83241] -- Index joins now consider functional dependencies from their input when determining equivalent columns instead of returning an internal error. [#83549][#83549] -- An error message that referred to a non-existent cluster setting now refers to the correct cluster setting: `bulkio.backup.deprecated_full_backup_with_subdir.enabled`. [#81976][#81976] -- Previously, the `CREATE` statement for the [`crdb_internal.cluster_contended_keys` view](https://www.cockroachlabs.com/docs/v22.1/crdb-internal) was missing the `crdb_internal.table_indexes.descriptor_id = crdb_internal.cluster_contention_events.table_id` `JOIN` condition, resulting in the view having more rows than expected. Now, the view properly joins the `crdb_internal.cluster_contention_events` and `crdb_internal.table_indexes` tables with all necessary `JOIN` conditions. [#83523][#83523] -- Fixed a bug where `ADD COLUMN` or `DROP COLUMN` statements with the legacy schema changer could fail on tables with large rows due to exceeding the Raft command maximum size. [#83816][#83816] - -
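For the row-level TTL fix in the list above, a sketch of the supported way to disable TTL; the `sessions` table is illustrative.

```sql
CREATE TABLE sessions (id UUID PRIMARY KEY, created_at TIMESTAMPTZ DEFAULT now())
  WITH (ttl_expire_after = '30 days');
-- ALTER TABLE sessions SET (ttl = 'off');  -- now rejected to avoid conflicting TTL settings
ALTER TABLE sessions RESET (ttl);           -- disables row-level TTL
```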

-<h3 id="v22-1-3-performance-improvements">Performance improvements</h3>
- -- This release significantly improves the performance of [`IMPORT` statements](https://www.cockroachlabs.com/docs/v22.1/import) when the source is producing data not sorted by the destination table's primary key, especially if the destination table has a very large primary key with lots of columns. [#82746][#82746] -- [Decommissioning nodes](https://www.cockroachlabs.com/docs/v22.1/node-shutdown) is now substantially faster, particularly for small to moderately loaded nodes. [#82680][#82680] -- Queries with filters containing tuples in `= ANY` expressions, such as `(a, b) = ANY(ARRAY[(1, 10), (2, 20)])`, are now index accelerated. [#83467][#83467] -- Fixed a bug where it was possible to accrue [MVCC](https://www.cockroachlabs.com/docs/v22.1/architecture/storage-layer#mvcc) garbage for much longer than needed. [#82967][#82967] - -
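The tuple `= ANY` acceleration can be checked with `EXPLAIN`; the table below is an assumption for the example.

```sql
CREATE TABLE kv (a INT, b INT, PRIMARY KEY (a, b));
-- The plan should now show a constrained scan over (a, b) rather than a full scan.
EXPLAIN SELECT * FROM kv WHERE (a, b) = ANY (ARRAY[(1, 10), (2, 20)]);
```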

-<h3 id="v22-1-3-contributors">Contributors</h3>
- -This release includes 70 merged PRs by 36 authors. - - -[#81976]: https://github.com/cockroachdb/cockroach/pull/81976 -[#82680]: https://github.com/cockroachdb/cockroach/pull/82680 -[#82721]: https://github.com/cockroachdb/cockroach/pull/82721 -[#82722]: https://github.com/cockroachdb/cockroach/pull/82722 -[#82746]: https://github.com/cockroachdb/cockroach/pull/82746 -[#82859]: https://github.com/cockroachdb/cockroach/pull/82859 -[#82874]: https://github.com/cockroachdb/cockroach/pull/82874 -[#82939]: https://github.com/cockroachdb/cockroach/pull/82939 -[#82960]: https://github.com/cockroachdb/cockroach/pull/82960 -[#82967]: https://github.com/cockroachdb/cockroach/pull/82967 -[#83001]: https://github.com/cockroachdb/cockroach/pull/83001 -[#83114]: https://github.com/cockroachdb/cockroach/pull/83114 -[#83127]: https://github.com/cockroachdb/cockroach/pull/83127 -[#83142]: https://github.com/cockroachdb/cockroach/pull/83142 -[#83167]: https://github.com/cockroachdb/cockroach/pull/83167 -[#83193]: https://github.com/cockroachdb/cockroach/pull/83193 -[#83210]: https://github.com/cockroachdb/cockroach/pull/83210 -[#83216]: https://github.com/cockroachdb/cockroach/pull/83216 -[#83241]: https://github.com/cockroachdb/cockroach/pull/83241 -[#83266]: https://github.com/cockroachdb/cockroach/pull/83266 -[#83293]: https://github.com/cockroachdb/cockroach/pull/83293 -[#83306]: https://github.com/cockroachdb/cockroach/pull/83306 -[#83333]: https://github.com/cockroachdb/cockroach/pull/83333 -[#83334]: https://github.com/cockroachdb/cockroach/pull/83334 -[#83344]: https://github.com/cockroachdb/cockroach/pull/83344 -[#83353]: https://github.com/cockroachdb/cockroach/pull/83353 -[#83360]: https://github.com/cockroachdb/cockroach/pull/83360 -[#83435]: https://github.com/cockroachdb/cockroach/pull/83435 -[#83467]: https://github.com/cockroachdb/cockroach/pull/83467 -[#83474]: https://github.com/cockroachdb/cockroach/pull/83474 -[#83507]: https://github.com/cockroachdb/cockroach/pull/83507 -[#83511]: https://github.com/cockroachdb/cockroach/pull/83511 -[#83523]: https://github.com/cockroachdb/cockroach/pull/83523 -[#83540]: https://github.com/cockroachdb/cockroach/pull/83540 -[#83542]: https://github.com/cockroachdb/cockroach/pull/83542 -[#83549]: https://github.com/cockroachdb/cockroach/pull/83549 -[#83553]: https://github.com/cockroachdb/cockroach/pull/83553 -[#83556]: https://github.com/cockroachdb/cockroach/pull/83556 -[#83569]: https://github.com/cockroachdb/cockroach/pull/83569 -[#83581]: https://github.com/cockroachdb/cockroach/pull/83581 -[#83624]: https://github.com/cockroachdb/cockroach/pull/83624 -[#83643]: https://github.com/cockroachdb/cockroach/pull/83643 -[#83648]: https://github.com/cockroachdb/cockroach/pull/83648 -[#83651]: https://github.com/cockroachdb/cockroach/pull/83651 -[#83652]: https://github.com/cockroachdb/cockroach/pull/83652 -[#83789]: https://github.com/cockroachdb/cockroach/pull/83789 -[#83815]: https://github.com/cockroachdb/cockroach/pull/83815 -[#83816]: https://github.com/cockroachdb/cockroach/pull/83816 -[7449ad418]: https://github.com/cockroachdb/cockroach/commit/7449ad418 diff --git a/src/current/_includes/releases/v22.1/v22.1.4.md b/src/current/_includes/releases/v22.1/v22.1.4.md deleted file mode 100644 index 1ad2fb956da..00000000000 --- a/src/current/_includes/releases/v22.1/v22.1.4.md +++ /dev/null @@ -1,76 +0,0 @@ -## v22.1.4 - -Release Date: July 19, 2022 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

-<h3 id="v22-1-4-security-updates">Security updates</h3>
- -- Added access control checks to three [multi-region related built-in functions](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#multi-region-functions). [#83986][#83986] - -

-<h3 id="v22-1-4-sql-language-changes">SQL language changes</h3>
- -- `crdb_internal.validate_ttl_scheduled_jobs` and `crdb_internal.repair_ttl_table_scheduled_job` can now only be run by users with the [`admin` role](https://www.cockroachlabs.com/docs/v22.1/security-reference/authorization#admin-role). [#83972][#83972] -- `txn_fingerprint_id` has been added to `crdb_internal.node_statement_statistics`. The type of the column is `NULL` or `STRING`. [#84020][#84020] -- The [sampled query telemetry log](https://www.cockroachlabs.com/docs/v22.1/logging-overview#logging-destinations) now includes session, transaction, and statement IDs, as well as the database name of the query. [#84026][#84026] -- `crdb_internal.compact_engine_spans` can now only be run by users with the [`admin` role](https://www.cockroachlabs.com/docs/v22.1/security-reference/authorization#admin-role). [#84095][#84095] - -

-<h3 id="v22-1-4-db-console-changes">DB Console changes</h3>
- -- Updated `User` column name to `User Name` and fixed `High-water Timestamp` column tooltip on the **Jobs** page. [#83914][#83914] -- Added the ability to search for exact terms in order when wrapping a search in quotes. [#84113][#84113] - -

-<h3 id="v22-1-4-bug-fixes">Bug fixes</h3>
- -- A flush message sent during portal execution in the `pgwire` extended protocol no longer results in an error. [#83955][#83955] -- Previously, [virtual computed columns](https://www.cockroachlabs.com/docs/v22.1/computed-columns) which were marked as `NOT NULL` could be added to new [secondary indexes](https://www.cockroachlabs.com/docs/v22.1/indexes). Now, attempts to add such columns to a secondary index will result in an error. Note that such invalid columns can still be added to tables. Work to resolve that bug is tracked in #[81675](https://github.com/cockroachdb/cockroach/issues/81675). [#83551][#83551] -- Statement and transaction statistics are now properly recorded for implicit transactions with multiple statements. [#84020][#84020] -- The `SessionTransactionReceived` session phase time is no longer recorded incorrectly (which caused large transaction times to appear in the Console) and has been renamed to `SessionTransactionStarted`. [#84030][#84030] -- Fixed a rare issue where the failure to apply a [Pebble](https://www.cockroachlabs.com/docs/v22.1/architecture/storage-layer#pebble) manifest change (typically due to block device failure or unavailability) could result in an incorrect [LSM](https://www.cockroachlabs.com/docs/v22.1/architecture/storage-layer#log-structured-merge-trees) state. Such a state would likely result in a panic soon after the failed application. This change alters the behavior of Pebble to panic immediately in the case of a failure to apply a change to the manifest. [#83735][#83735] -- Fixed a bug which could crash nodes when visiting the [DB Console Statements](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page) page. This bug was present since version v21.2.0. [#83714][#83714] -- Moved connection OK log and metric to same location after auth completes for consistency. This resolves an inconsistency (see linked issue) in the DB Console where the log and metric did not match. [#84103][#84103] -- CockroachDB previously would not normalize `timestamp/timestamptz - timestamp/timestamptz` like PostgreSQL does in some cases (depending on the query). This is now fixed. [#83999][#83999] -- Custom time period selection is now aligned between the [Metrics](https://www.cockroachlabs.com/docs/v22.1/ui-overview-dashboard) and [SQL Activity](https://www.cockroachlabs.com/docs/v22.1/ui-overview#sql-activity) pages in the DB Console. [#84184][#84184] -- Fixed a critical bug (#[83687](https://github.com/cockroachdb/cockroach/issues/83687)) introduced in v22.1.0 where a failure to transfer a [lease](https://www.cockroachlabs.com/docs/v22.1/architecture/replication-layer#leases) in the joint config may result in range unavailability. The fix allows the original [leaseholder](https://www.cockroachlabs.com/docs/v22.1/architecture/replication-layer#leases) to reacquire the lease so that lease transfer can be retried. [#84145][#84145] -- Fixed a minor bug that caused internal errors and poor index recommendations when running [`EXPLAIN`](https://www.cockroachlabs.com/docs/v22.1/explain) statements. [#84220][#84220] -- Fixed a bug where [`ALTER TABLE ... SET LOCALITY REGIONAL BY ROW`](https://www.cockroachlabs.com/docs/v22.1/set-locality#set-the-table-locality-to-regional-by-row) could leave the region `ENUM` type descriptor unaware of a dependency on the altered table. This would, in turn, wrongly permit a `DROP REGION` to succeed, rendering the table unusable. 
Note that this fix does not help existing clusters which have already run such an `ALTER TABLE`; see #[84322](https://github.com/cockroachdb/cockroach/issues/84322) for more information on this case. [#84339][#84339] -- Fixed a bug that could cause internal errors in rare cases when running queries with [`GROUP BY`](https://www.cockroachlabs.com/docs/v22.1/select-clause#create-aggregate-groups) clauses. [#84307][#84307] -- Fixed a bug in transaction conflict resolution which could allow backups to wait on long-running transactions. [#83900][#83900] -- Fixed an internal error `node ... with MaxCost added to the memo` that could occur during planning when calculating the cardinality of an outer join when one of the inputs had 0 rows. [#84377][#84377] - -
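The `timestamptz - timestamptz` normalization noted in the list above can be exercised directly; the exact display of the resulting interval depends on the session time zone.

```sql
-- CockroachDB now normalizes the resulting interval the way PostgreSQL does.
SELECT '2022-07-19 12:00:00+00'::TIMESTAMPTZ - '2022-07-18 06:30:00+00'::TIMESTAMPTZ;
```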

-<h3 id="v22-1-4-known-limitations">Known limitations</h3>
- -- A performance regression exists for v22.1.4 and v22.1.5 that causes [DB Console Metrics pages](https://www.cockroachlabs.com/docs/v21.2/ui-overview-dashboard) to fail to load, or to load slower than expected, when attempting to display metrics graphs. This regression is fixed in CockroachDB v22.1.6. [#85636](https://github.com/cockroachdb/cockroach/issues/85636) - -

-<h3 id="v22-1-4-contributors">Contributors</h3>
- -This release includes 42 merged PRs by 26 authors. - -[#81721]: https://github.com/cockroachdb/cockroach/pull/81721 -[#83551]: https://github.com/cockroachdb/cockroach/pull/83551 -[#83714]: https://github.com/cockroachdb/cockroach/pull/83714 -[#83735]: https://github.com/cockroachdb/cockroach/pull/83735 -[#83878]: https://github.com/cockroachdb/cockroach/pull/83878 -[#83900]: https://github.com/cockroachdb/cockroach/pull/83900 -[#83914]: https://github.com/cockroachdb/cockroach/pull/83914 -[#83955]: https://github.com/cockroachdb/cockroach/pull/83955 -[#83972]: https://github.com/cockroachdb/cockroach/pull/83972 -[#83986]: https://github.com/cockroachdb/cockroach/pull/83986 -[#83999]: https://github.com/cockroachdb/cockroach/pull/83999 -[#84020]: https://github.com/cockroachdb/cockroach/pull/84020 -[#84026]: https://github.com/cockroachdb/cockroach/pull/84026 -[#84030]: https://github.com/cockroachdb/cockroach/pull/84030 -[#84077]: https://github.com/cockroachdb/cockroach/pull/84077 -[#84095]: https://github.com/cockroachdb/cockroach/pull/84095 -[#84103]: https://github.com/cockroachdb/cockroach/pull/84103 -[#84111]: https://github.com/cockroachdb/cockroach/pull/84111 -[#84113]: https://github.com/cockroachdb/cockroach/pull/84113 -[#84145]: https://github.com/cockroachdb/cockroach/pull/84145 -[#84184]: https://github.com/cockroachdb/cockroach/pull/84184 -[#84220]: https://github.com/cockroachdb/cockroach/pull/84220 -[#84307]: https://github.com/cockroachdb/cockroach/pull/84307 -[#84339]: https://github.com/cockroachdb/cockroach/pull/84339 -[#84377]: https://github.com/cockroachdb/cockroach/pull/84377 -[2a2e5fcb3]: https://github.com/cockroachdb/cockroach/commit/2a2e5fcb3 -[47ba66bed]: https://github.com/cockroachdb/cockroach/commit/47ba66bed -[d87250cef]: https://github.com/cockroachdb/cockroach/commit/d87250cef diff --git a/src/current/_includes/releases/v22.1/v22.1.5.md b/src/current/_includes/releases/v22.1/v22.1.5.md deleted file mode 100644 index 4acb5f8441f..00000000000 --- a/src/current/_includes/releases/v22.1/v22.1.5.md +++ /dev/null @@ -1,62 +0,0 @@ -## v22.1.5 - -Release Date: July 28, 2022 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

-<h3 id="v22-1-5-sql-language-changes">SQL language changes</h3>
- -- [`AS OF SYSTEM TIME`](https://www.cockroachlabs.com/docs/v22.1/as-of-system-time) now takes the time zone into account when converting to UTC. For example: `2022-01-01 08:00:00-04:00` is now treated the same as `2022-01-01 12:00:00` instead of being interpreted as `2022-01-01 08:00:00` [#84663][#84663] - -
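Both of the following now read as of the same UTC instant; the table `t` is assumed to exist and to be old enough for the historical read.

```sql
-- '2022-01-01 08:00:00-04:00' is now interpreted as 2022-01-01 12:00:00 UTC.
SELECT count(*) FROM t AS OF SYSTEM TIME '2022-01-01 08:00:00-04:00';
SELECT count(*) FROM t AS OF SYSTEM TIME '2022-01-01 12:00:00';
```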

-<h3 id="v22-1-5-db-console-changes">DB Console changes</h3>
- -- Updated labels from "date range" to "time interval" on time picker (custom option, preset title, previous and next arrows) [#84517][#84517] -- Removed `View Statement Details` link inside the [**Session Details**](https://www.cockroachlabs.com/docs/v22.1/ui-sessions-page) page. [#84502][#84502] -- Updated the message when there is no data on the selected time interval on the [**Statements**](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page) and [**Transactions**](https://www.cockroachlabs.com/docs/v22.1/ui-transactions-page) pages. [#84623][#84623] - -

-<h3 id="v22-1-5-bug-fixes">Bug fixes</h3>
- -- Fixed a conversion on the jobs endpoint, so that the [**Jobs**](https://www.cockroachlabs.com/docs/v22.1/ui-jobs-page) page won't return a `500` error when a job contained an error with quotes. [#84464][#84464] -- The 'Parse', 'Bind', and 'Execute' `pgwire` commands now return an error if they are used during an aborted transaction. [`COMMIT`](https://www.cockroachlabs.com/docs/v22.1/commit-transaction) and [`ROLLBACK`](https://www.cockroachlabs.com/docs/v22.1/rollback-transaction) statements are still allowed during an aborted transaction. [#84329][#84329] -- Sorting on the plans table inside the [**Statement Details**](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page#statement-details-page) page is now properly working. [#84627][#84627] -- Fixed a bug that could cause [unique indexes](https://www.cockroachlabs.com/docs/v22.1/unique) to be unexpectedly dropped after running an [`ALTER PRIMARY KEY`](https://www.cockroachlabs.com/docs/v22.1/alter-primary-key) statement, if the new PK column set is a subset of the old PK column set.[#84570][#84570] -- Fixed a bug where some statements in a batch would not get executed if the following conditions were met: - - A batch of statements is sent in a single string. - - A [`BEGIN`](https://www.cockroachlabs.com/docs/v22.1/begin-transaction) statement appears in the middle of the batch. - - The `enable_implicit_transaction_for_batch_statements` [session variable](https://www.cockroachlabs.com/docs/v22.1/set-vars) is set to `true`. (This defaults to false in v22.1) - This bug was introduced in v22.1.2. [#84593][#84593] -- Previously, CockroachDB could deadlock when evaluating analytical queries if multiple queries had to [spill to disk](https://www.cockroachlabs.com/docs/v22.1/vectorized-execution#disk-spilling-operations) at the same time. This is now fixed by making some of the queries error out instead. If you know that there is no deadlock and that some analytical queries that have spilled are just taking too long, blocking other queries from spilling, you can adjust newly introduced `sql.distsql.acquire_vec_fds.max_retries` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) (use `0` to enable the previous behavior of indefinite waiting until spilling resources open up). [#84657][#84657] -- Fixes a bug where cluster restores of older backups would silently clobber system tables or fail to complete. [#84904][#84904] -- Fixed a bug that was introduced in v21.2 that could cause increased memory usage when scanning a table with wide rows. [#83966][#83966] -- Fixed a bug in the `concat` projection operator on arrays that gave output of nulls when the projection operator can actually handle null arguments and may result in a non-null output. [#84615][#84615] -- Reduced foreground latency impact when performing changefeed backfills by adjusting `changefeed.memory.per_changefeed_limit` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) to 128MiB (Enterprise only). [#84702][#84702] - -
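A sketch of the batch scenario fixed in the list above. It assumes the client sends the second line as a single batched string, and the table `t` is illustrative.

```sql
SET enable_implicit_transaction_for_batch_statements = true;
-- Sent as one batch; the statements after BEGIN are no longer skipped.
INSERT INTO t VALUES (1); BEGIN; INSERT INTO t VALUES (2); COMMIT;
```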

-<h3 id="v22-1-5-known-limitations">Known limitations</h3>
- -- A performance regression exists for v22.1.4 and v22.1.5 that causes [DB Console Metrics pages](https://www.cockroachlabs.com/docs/v21.2/ui-overview-dashboard) to fail to load, or to load slower than expected, when attempting to display metrics graphs. This regression is fixed in CockroachDB v22.1.6. [#85636](https://github.com/cockroachdb/cockroach/issues/85636) - -

-<h3 id="v22-1-5-contributors">Contributors</h3>
- -This release includes 30 merged PRs by 17 authors. - -[#83966]: https://github.com/cockroachdb/cockroach/pull/83966 -[#84269]: https://github.com/cockroachdb/cockroach/pull/84269 -[#84329]: https://github.com/cockroachdb/cockroach/pull/84329 -[#84354]: https://github.com/cockroachdb/cockroach/pull/84354 -[#84464]: https://github.com/cockroachdb/cockroach/pull/84464 -[#84472]: https://github.com/cockroachdb/cockroach/pull/84472 -[#84502]: https://github.com/cockroachdb/cockroach/pull/84502 -[#84517]: https://github.com/cockroachdb/cockroach/pull/84517 -[#84570]: https://github.com/cockroachdb/cockroach/pull/84570 -[#84593]: https://github.com/cockroachdb/cockroach/pull/84593 -[#84615]: https://github.com/cockroachdb/cockroach/pull/84615 -[#84623]: https://github.com/cockroachdb/cockroach/pull/84623 -[#84627]: https://github.com/cockroachdb/cockroach/pull/84627 -[#84657]: https://github.com/cockroachdb/cockroach/pull/84657 -[#84663]: https://github.com/cockroachdb/cockroach/pull/84663 -[#84702]: https://github.com/cockroachdb/cockroach/pull/84702 -[#84726]: https://github.com/cockroachdb/cockroach/pull/84726 -[#84857]: https://github.com/cockroachdb/cockroach/pull/84857 -[#84858]: https://github.com/cockroachdb/cockroach/pull/84858 -[#84904]: https://github.com/cockroachdb/cockroach/pull/84904 -[0ac3ee0ca]: https://github.com/cockroachdb/cockroach/commit/0ac3ee0ca diff --git a/src/current/_includes/releases/v22.1/v22.1.6.md b/src/current/_includes/releases/v22.1/v22.1.6.md deleted file mode 100644 index 103671cab85..00000000000 --- a/src/current/_includes/releases/v22.1/v22.1.6.md +++ /dev/null @@ -1,134 +0,0 @@ -## v22.1.6 - -Release Date: August 23, 2022 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

-<h3 id="v22-1-6-security-updates">Security updates</h3>
- -- [Client certificates](https://www.cockroachlabs.com/docs/v22.1/authentication#client-authentication) now have tenant scoping, which allows an operator to authenticate a client to a specific tenant. A tenant-scoped client certificate contains the client name in the CN and the tenant ID in the URIs section of the Subject Alternative Name (SAN) values. The format of the URI SAN is `crdb://tenant//user/` [#84371][#84371]. -- The HTTP endpoints under the `/api/v2` prefix will now accept cookie-based authentication similar to other HTTP endpoints used by the [DB Console](https://www.cockroachlabs.com/docs/v22.1/ui-overview). The encoded session must be in a cookie named `"session"`, and the `"X-Cockroach-API-Session"` header is required to be set to `"cookie"` for the session to be read from the cookie header. A cookie provided without the custom header present will be ignored. [#85553][#85553] - -

-<h3 id="v22-1-6-general-changes">General changes</h3>
- -- Upgraded `cloud.google.com/go/storage` from v18.2.0 to v1.21.0 to allow for injection of custom retry logic in the [SDK](https://cloud.google.com/sdk). [#85763][#85763] - -

-<h3 id="v22-1-6-sql-language-changes">SQL language changes</h3>
- -- Removed the `DatabaseID` field from the sampled query telemetry log due to its potential to cause indefinite blocking in the case of a lease acquisition failure. [#85026][#85026] -- The structured payloads used for telemetry logs now include the following new fields: - - - `MaxFullScanRowsEstimate`: The maximum number of rows scanned by a full scan, as estimated by the optimizer. - - `TotalScanRowsEstimate`: The total number of rows read by all scans in a query, as estimated by the optimizer. - - `OutputRowsEstimate`: The number of rows output by a query, as estimated by the optimizer. - - `StatsAvailable`: Whether table statistics were available to the optimizer when planning a query. - - `NanosSinceStatsCollected`: The maximum number of nanoseconds that have passed since stats were collected on any table scanned by a query. - - `BytesRead`: The number of bytes read from disk. - - `RowsRead`: The number of rows read from disk. - - `RowsWritten`: The number of rows written. - - `InnerJoinCount`: The number of inner joins in the query plan. - - `LeftOuterJoinCount`: The number of left (or right) outer joins in the query plan. - - `FullOuterJoinCount`: The number of full outer joins in the query plan. - - `SemiJoinCount`: The number of semi joins in the query plan. - - `AntiJoinCount`: The number of anti joins in the query plan. - - `IntersectAllJoinCount`: The number of intersect all joins in the query plan. - - `ExceptAllJoinCount`: The number of except all joins in the query plan. - - `HashJoinCount`: The number of hash joins in the query plan. - - `CrossJoinCount`: The number of cross joins in the query plan. - - `IndexJoinCount`: The number of index joins in the query plan. - - `LookupJoinCount`: The number of lookup joins in the query plan. - - `MergeJoinCount`: The number of merge joins in the query plan. - - `InvertedJoinCount`: The number of inverted joins in the query plan. - - `ApplyJoinCount`: The number of apply joins in the query plan. - - `ZigZagJoinCount`: The number of zig zag joins in the query plan. [#85337][#85337] [#85743][#85743] - -

-<h3 id="v22-1-6-operational-changes">Operational changes</h3>
- -- Telemetry logs will now display more finely redacted error messages from SQL execution. Previously, the entire error string was fully redacted. [#85403][#85403] - -

-<h3 id="v22-1-6-command-line-changes">Command-line changes</h3>
- -- The CLI now contains a flag (`--log-config-vars`) that allows for environment variables to be specified for expansion within the logging configuration file. This allows a single logging configuration file to service an array of sinks without further manipulation of the configuration file. [#85171][#85171] - -

-<h3 id="v22-1-6-api-endpoint-changes">API endpoint changes</h3>
- -- A new `/api/v2/sql/` endpoint enables execution of simple SQL queries over HTTP. [#84374][#84374] - -

-<h3 id="v22-1-6-bug-fixes">Bug fixes</h3>
- -- Fixed an issue with incorrect start time position of selected time range on the [Metrics page](https://www.cockroachlabs.com/docs/v22.1/ui-overview#metrics). [#85835][#85835] -- Fixed an issue where the [`information_schema`](https://www.cockroachlabs.com/docs/v22.1/information-schema) and [`SHOW GRANTS`](https://www.cockroachlabs.com/docs/v22.1/show-grants) command did not report that object owners have permission to `GRANT` privileges on that object. [#84918][#84918] -- Fixed an issue where imports and rebalances were being slowed down due to the accumulation of empty directories from range snapshot applications. [#84223][#84223] -- The v22.1 upgrade migration `21.2-56: populate RangeAppliedState.RaftAppliedIndexTerm for all ranges` is now more resilient to failures. This migration must be applied across all ranges and replicas in the system, and can fail with `operation "wait for Migrate application" timed out` if any replicas are temporarily unavailable, which is increasingly likely to happen in large clusters with many ranges. Previously, this would restart the migration from the start. [#84909][#84909] -- Fixed a bug where using `CREATE SCHEDULE` in a mixed version cluster could prevent the scheduled job from actually running because of incorrectly writing a lock file. [#84372][#84372] -- Previously, [restoring from backups](https://www.cockroachlabs.com/docs/v22.1/backup-and-restore-overview) on mixed-version clusters that had not yet upgraded to v22.1 could fail with `cannot use bulkio.restore_at_current_time.enabled until version MVCCAddSSTable`. Restores now fall back to the v21.2 behavior instead of erroring in this scenario. [#84641][#84641] -- Fixed incorrect error handling that could cause casts to OID types to fail in some cases. [#85124][#85124] -- Fixed a bug where the privileges for an object owner would not be correctly transferred when the owner was changed. [#85083][#85083] -- The `crdb_internal.deserialize_session` built-in function no longer causes an error when handling an empty prepared statement. [#85122][#85122] -- Fixed a bug introduced in v20.2 that could cause a panic when an expression contained a geospatial comparison like `~` that was negated. [#84630][#84630] -- Fixed a bug where new leaseholders with a `VOTER_INCOMING` type would not always be detected properly during query execution, leading to occasional increased tail latencies due to unnecessary internal retries. [#85315][#85315] -- Fixed a bug introduced in v22.1.0 that could cause the optimizer to not use auto-commit for some mutations in multi-region clusters when it should have done so. [#85434][#85434] -- Fixed a bug introduced in v22.1.0 that could cause the optimizer to reject valid bounded staleness queries with the error `unimplemented: cannot use bounded staleness for DISTRIBUTE`. [#85434][#85434] -- Previously, concatenating a UUID with a string would not use the normal string representation of the UUID values. This is now fixed so that, for example, `'eb64afe6-ade7-40ce-8352-4bb5eec39075'::UUID || 'foo'` returns `eb64afe6-ade7-40ce-8352-4bb5eec39075foo` rather than the encoded representation. [#85416][#85416] -- Fixed a bug where CockroachDB could run into an error when a query included a limited reverse scan and some rows needed to be retrieved by `GET` requests. [#85584][#85584] -- Fixed a bug where the SQL execution HTTP endpoint did not properly support queries with multiple result values. 
[#84374][#84374] -- Fixed a bug where clients could sometimes receive errors due to lease acquisition timeouts of the form `operation "storage.pendingLeaseRequest: requesting lease" timed out after 6s`. [#85428][#85428] -- The [**Statement details**](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page) page now renders properly for statements where the hex representation of the `fingerprint_id` is less than 16 digits. [#85529][#85529] -- Fixed a bug that could cause union queries to return incorrect results in rare cases. [#85654][#85654] -- Fixed a bug that could cause upgrades to fail if there was a table with a computed column that used a cast from [`TIMESTAMPTZ`](https://www.cockroachlabs.com/docs/v22.1/timestamp) to [`STRING`](https://www.cockroachlabs.com/docs/v22.1/string). [#85779][#85779] -- Fixed a bug that could cause a panic in rare cases when the unnest() [function](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators) was used with a tuple return type. [#85349][#85349] -- Fixed an issue where the `NO_INDEX_JOIN` hint could be ignored by the optimizer in some cases, causing it to create a query plan with an index join. [#85917][#85917] -- Fixed a bug where changefeed jobs undergoing catch-up scans could fail with the error `expected provisional value for intent with ts X, found Y`. [#86117][#86117] -- Previously, an empty column in the input to `COPY ... FROM CSV` would be treated as an empty string. Now, this is treated as `NULL`. The quoted empty string can still be used to input an empty string. Similarly, if a different `NULL` token is specified in the command options, it can be quoted in order to be treated as the equivalent string value. [#85926][#85926] -- Fixed a bug where attempting to select data from a table that had different partitioning columns used for the primary and secondary indexes could cause an error. This occurred if the primary index had zone configurations applied to the index partitions with different regions for different partitions, and the secondary index had a different column type than the primary index for its partitioning column(s). [#86218][#86218] - -
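The UUID concatenation fix in the list above can be verified directly:

```sql
-- Returns 'eb64afe6-ade7-40ce-8352-4bb5eec39075foo' rather than using the encoded representation.
SELECT 'eb64afe6-ade7-40ce-8352-4bb5eec39075'::UUID || 'foo';
```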

-<h3 id="v22-1-6-performance-improvements">Performance improvements</h3>
-
-- Previously, a sudden increase in the volume of pending MVCC GC work could impact foreground latencies. These sudden increases commonly occurred when:
-
-    - `gc.ttlseconds` was reduced dramatically over tables/indexes that accrue a lot of MVCC garbage (as in the example after this note),
-    - a paused backup job from more than one day ago was canceled or failed, or
-    - a backup job that started more than one day ago just finished.
-
-    An indicator of a large increase in the volume of pending MVCC GC work is a steep climb in the **GC Queue** graph on the **Metrics** page of the [DB Console](https://www.cockroachlabs.com/docs/v22.1/ui-overview). With this fix, the effect of this sudden buildup on foreground latencies is reduced. [#85899][#85899]
-
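As referenced above, dramatically lowering `gc.ttlseconds` is one way such a buildup of pending MVCC GC work can start; the table name here is illustrative.

```sql
-- Reduce the GC TTL on a table that accrues a lot of MVCC garbage (10 minutes shown as an example).
ALTER TABLE events CONFIGURE ZONE USING gc.ttlseconds = 600;
```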

-<h3 id="v22-1-6-contributors">Contributors</h3>
- -This release includes 74 merged PRs by 37 authors. - -[#84223]: https://github.com/cockroachdb/cockroach/pull/84223 -[#84371]: https://github.com/cockroachdb/cockroach/pull/84371 -[#84372]: https://github.com/cockroachdb/cockroach/pull/84372 -[#84374]: https://github.com/cockroachdb/cockroach/pull/84374 -[#84630]: https://github.com/cockroachdb/cockroach/pull/84630 -[#84641]: https://github.com/cockroachdb/cockroach/pull/84641 -[#84909]: https://github.com/cockroachdb/cockroach/pull/84909 -[#84918]: https://github.com/cockroachdb/cockroach/pull/84918 -[#85026]: https://github.com/cockroachdb/cockroach/pull/85026 -[#85083]: https://github.com/cockroachdb/cockroach/pull/85083 -[#85122]: https://github.com/cockroachdb/cockroach/pull/85122 -[#85124]: https://github.com/cockroachdb/cockroach/pull/85124 -[#85152]: https://github.com/cockroachdb/cockroach/pull/85152 -[#85171]: https://github.com/cockroachdb/cockroach/pull/85171 -[#85315]: https://github.com/cockroachdb/cockroach/pull/85315 -[#85320]: https://github.com/cockroachdb/cockroach/pull/85320 -[#85337]: https://github.com/cockroachdb/cockroach/pull/85337 -[#85349]: https://github.com/cockroachdb/cockroach/pull/85349 -[#85403]: https://github.com/cockroachdb/cockroach/pull/85403 -[#85416]: https://github.com/cockroachdb/cockroach/pull/85416 -[#85428]: https://github.com/cockroachdb/cockroach/pull/85428 -[#85434]: https://github.com/cockroachdb/cockroach/pull/85434 -[#85529]: https://github.com/cockroachdb/cockroach/pull/85529 -[#85553]: https://github.com/cockroachdb/cockroach/pull/85553 -[#85584]: https://github.com/cockroachdb/cockroach/pull/85584 -[#85654]: https://github.com/cockroachdb/cockroach/pull/85654 -[#85743]: https://github.com/cockroachdb/cockroach/pull/85743 -[#85763]: https://github.com/cockroachdb/cockroach/pull/85763 -[#85779]: https://github.com/cockroachdb/cockroach/pull/85779 -[#85835]: https://github.com/cockroachdb/cockroach/pull/85835 -[#85899]: https://github.com/cockroachdb/cockroach/pull/85899 -[#85917]: https://github.com/cockroachdb/cockroach/pull/85917 -[#85926]: https://github.com/cockroachdb/cockroach/pull/85926 -[#86117]: https://github.com/cockroachdb/cockroach/pull/86117 -[#86218]: https://github.com/cockroachdb/cockroach/pull/86218 -[4b6f93b7b]: https://github.com/cockroachdb/cockroach/commit/4b6f93b7b diff --git a/src/current/_includes/releases/v22.1/v22.1.7.md b/src/current/_includes/releases/v22.1/v22.1.7.md deleted file mode 100644 index c2e3902d92d..00000000000 --- a/src/current/_includes/releases/v22.1/v22.1.7.md +++ /dev/null @@ -1,105 +0,0 @@ -## v22.1.7 - -Release Date: September 15, 2022 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

-<h3 id="v22-1-7-enterprise-edition-changes">{{site.data.products.enterprise}} edition changes</h3>
- -- The new `kv.rangefeed.range_stuck_threshold` (default `0`, i.e., disabled) [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) instructs rangefeed clients (used internally by [changefeeds](https://www.cockroachlabs.com/docs/v22.1/create-and-configure-changefeeds)) to restart automatically if no checkpoint or other event has been received from the server for some time. This is a defense-in-depth mechanism which will log output as follows if triggered: `restarting stuck rangefeed: waiting for r100 (n1,s1):1 [threshold 1m]: rangefeed restarting due to inactivity`. [#87253][#87253] - -
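Enabling the new defense-in-depth setting is a one-line change; the one-minute threshold below is only an example value.

```sql
SET CLUSTER SETTING kv.rangefeed.range_stuck_threshold = '1m';
```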

-<h3 id="v22-1-7-sql-language-changes">SQL language changes</h3>
- -- Added a new [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) `sql.stats.response.show_internal` (default: `false`) that can be set to `true` to display information about internal statistics on the [SQL Activity page](https://www.cockroachlabs.com/docs/v22.1/ui-sql-dashboard), with fingerprint option. [#86869][#86869] -- [`EXPLAIN ANALYZE`](https://www.cockroachlabs.com/docs/v22.1/explain-analyze) output now contains a warning when the estimated row count for scans is inaccurate. It includes a hint to collect the table [statistics](https://www.cockroachlabs.com/docs/v22.1/cost-based-optimizer#table-statistics) manually. [#86871][#86871] -- CockroachDB allows mismatched type numbers in `PREPARE` statements. [#87161][#87161] -- Decreased the cardinality of the number on `__moreN__` when replacing literals. [#87269][#87269] -- The structured payloads used for telemetry logs now include the new `Regions` field which indicates the [regions](https://www.cockroachlabs.com/docs/v22.1/multiregion-overview#database-regions) of the nodes where SQL processing ran for the query. [#87466][#87466] -- Added the schema name to [index](https://www.cockroachlabs.com/docs/v22.1/indexes) usage statistics telemetry. [#87624][#87624] -- Added a creation timestamp to [index](https://www.cockroachlabs.com/docs/v22.1/indexes) usage statistics telemetry. [#87624][#87624] - -
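To surface internal statistics on the SQL Activity page, as described in the list above:

```sql
SET CLUSTER SETTING sql.stats.response.show_internal = true;
```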

-<h3 id="v22-1-7-command-line-changes">Command-line changes</h3>
- -- The `\c` metacommand in the [`cockroach sql`](https://www.cockroachlabs.com/docs/v22.1/cockroach-sql) shell no longer shows the password in plaintext. [#87548][#87548] - -

-<h3 id="v22-1-7-db-console-changes">DB Console changes</h3>
- -- The plan table on the **Explain Plans** tab of the [Statement Details](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page) page now displays the plan gist instead of plan ID. Also added the plan gist as the first line on the actual **Explain Plans** display. [#86872][#86872] -- Added new **Last Execution Time** column to the statements table on the [**Statements** page](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page). This column is hidden by default. [#87077][#87077] -- Added **Transaction Fingerprint ID** and **Statement Fingerprint ID** columns to the corresponding [**SQL Activity**](https://www.cockroachlabs.com/docs/v22.1/ui-sql-dashboard) overview pages. These columns are hidden by default. [#87100][#87100] -- Properly formatted the **Execution Count** under the [**Statement Fingerprints**](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page#statement-fingerprint-page) page. Increased the timeout for the **Statement Fingerprints** page so it shows a proper timeout error when it happens, no longer crashing the page. [#87209][#87209] - -

-<h3 id="v22-1-7-bug-fixes">Bug fixes</h3>
- -- Fixed a vulnerability in the [optimizer](https://www.cockroachlabs.com/docs/v22.1/cost-based-optimizer) that could cause a panic in rare cases when planning complex queries with [`ORDER BY`](https://www.cockroachlabs.com/docs/v22.1/order-by). [#86804][#86804] -- Previously, queries with many [joins](https://www.cockroachlabs.com/docs/v22.1/joins) and projections of multi-column expressions (e.g., `col1 + col2`), either present in the query or within a [virtual column](https://www.cockroachlabs.com/docs/v22.1/computed-columns) definition, could experience very long optimization times or hangs, where the query is never sent for execution. This is now fixed by adding the `disable_hoist_projection_in_join_limitation` [session flag](https://www.cockroachlabs.com/docs/v22.1/set-vars#supported-variables). [#85871][#85871] -- Fixed a crash/panic that could occur if placeholder arguments were used with the `with_min_timestamp` or `with_max_staleness` [functions](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators). [#86881][#86881] -- Fixed a crash that could happen when formatting queries that have placeholder `BitArray` arguments. [#86885][#86885] -- CockroachDB now more precisely respects the `distsql_workmem` [setting](https://www.cockroachlabs.com/docs/v22.1/set-vars#supported-variables) which improves the stability of each node and makes OOMs less likely. [#86916][#86916] -- Previously, escaping a double quote (`"`) with [`COPY`](https://www.cockroachlabs.com/docs/v22.1/copy-from) in `CSV` mode could ignore all subsequent lines in the same `COPY` if an `ESCAPE` clause were specified. This is now resolved. [#86977][#86977] -- Fixed a bug that caused some special characters to be misread if they were being read by [`COPY ... FROM`](https://www.cockroachlabs.com/docs/v22.1/copy-from) into a `TEXT[]` column. [#86887][#86887] -- Timescale object is now properly constructed from session storage, preventing bugs and crashes in pages that use the timescale object when reloading the page. [#86975][#86975] -- Previously, CockroachDB would return an internal error when evaluating the `json_build_object` [built-in](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators) when an [enum](https://www.cockroachlabs.com/docs/v22.1/enum) or a void datum were passed as the first argument. This is now fixed. [#86851][#86851] -- The statement tag for the [`SHOW`](https://www.cockroachlabs.com/docs/v22.1/show-vars) command results in the pgwire protocol no longer containing the number of returned rows. [#87126][#87126] -- Fixed a bug where the options given to the [`BEGIN TRANSACTION`](https://www.cockroachlabs.com/docs/v22.1/begin-transaction) command would be ignored if the `BEGIN` was a prepared statement. [#87126][#87126] -- Fixed a bug that caused internal errors like "unable to [vectorize](https://www.cockroachlabs.com/docs/v22.1/vectorized-execution) execution plan: unhandled expression type" in rare cases. [#87182][#87182] -- The **Explain Plans** tab inside the [Statement Fingerprints](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page#statement-fingerprint-page) page now groups plans that have the same shape but a different number of spans in corresponding scans. [#87211][#87211] -- A bug in the column backfiller, which is used to add or remove columns from tables, failed to account for the need to read [virtual columns](https://www.cockroachlabs.com/docs/v22.1/computed-columns) which were part of a [primary key](https://www.cockroachlabs.com/docs/v22.1/primary-key). 
[Hash-sharded](https://www.cockroachlabs.com/docs/v22.1/hash-sharded-indexes) [indexes](https://www.cockroachlabs.com/docs/v22.1/indexes), starting in v22.1, use [virtual columns](https://www.cockroachlabs.com/docs/v22.1/computed-columns). Any [hash-sharded](https://www.cockroachlabs.com/docs/v22.1/hash-sharded-indexes) table created in v22.1 or any table created with a virtual column as part of its primary key would indefinitely fail to complete a [schema change](https://www.cockroachlabs.com/docs/v22.1/online-schema-changes) which adds or removes columns. This bug has been fixed. [#87272][#87272] -- Added a missing memory accounting call when appending a KV to the underlying `kvBuf`. [#87118][#87118] -- Some [upgrade](https://www.cockroachlabs.com/docs/v22.1/upgrade-cockroach-version) migrations perform [schema changes](https://www.cockroachlabs.com/docs/v22.1/online-schema-changes) on system tables. Those upgrades which added [indexes](https://www.cockroachlabs.com/docs/v22.1/indexes) could, in some cases, get caught retrying because they failed to detect that the migration had already occurred due to the existence of a populated field. When that happens, the finalization of the new version could hang indefinitely and require manual intervention. This bug has been fixed. [#87633][#87633] -- Fixed a bug that led to the `querySummary` field in the `crdb_internal.statements_statistics`' metadata column being empty. [#87618][#87618] -- Previously, the `querySummary` metadata field in the `crdb_internal.statement_statistics` table was inconsistent with the query metadata field for executed prepared statements. These fields are now consistent for prepared statements. [#87618][#87618] -- Fixed a rare bug where errors could occur related to the use of [arrays](https://www.cockroachlabs.com/docs/v22.1/array) of [enums](https://www.cockroachlabs.com/docs/v22.1/enum). [#85961][#85961] -- Fixed a bug that would result in a failed cluster [restore](https://www.cockroachlabs.com/docs/v22.1/restore). [#87764][#87764] -- Fixed a misused query optimization involving tables with one or more [`PARTITION BY`](https://www.cockroachlabs.com/docs/v22.1/partition-by) clauses and partition [zone constraints](https://www.cockroachlabs.com/docs/v22.1/configure-replication-zones) which assign [region locality](https://www.cockroachlabs.com/docs/v22.1/set-locality) to those partitions. In some cases the [optimizer](https://www.cockroachlabs.com/docs/v22.1/cost-based-optimizer) picks a ['locality-optimized search'](https://www.cockroachlabs.com/docs/v22.1/cost-based-optimizer#locality-optimized-search-in-multi-region-clusters) query plan which is not truly locality-optimized, and has higher latency than competing query plans which use distributed scan. Locality-optimized search is now avoided in cases which are known not to benefit from this optimization. [#87848][#87848] - -

Performance improvements

- -- Planning time has been reduced for queries over tables with a large number of columns and/or [indexes](https://www.cockroachlabs.com/docs/v22.1/indexes). [#86749][#86749] -- Long-running SQL sessions are now less likely to maintain large allocations for long periods of time, which decreases the risk of OOM and improves memory utilization. [#86797][#86797] - -

Build changes

- -- Fixed OSS builds that did not have CCL-licensed UI intermediates lingering on-disk. [#86425][#86425] - -

Contributors

- -This release includes 84 merged PRs by 37 authors. - -[#85871]: https://github.com/cockroachdb/cockroach/pull/85871 -[#85961]: https://github.com/cockroachdb/cockroach/pull/85961 -[#86425]: https://github.com/cockroachdb/cockroach/pull/86425 -[#86428]: https://github.com/cockroachdb/cockroach/pull/86428 -[#86749]: https://github.com/cockroachdb/cockroach/pull/86749 -[#86797]: https://github.com/cockroachdb/cockroach/pull/86797 -[#86804]: https://github.com/cockroachdb/cockroach/pull/86804 -[#86851]: https://github.com/cockroachdb/cockroach/pull/86851 -[#86869]: https://github.com/cockroachdb/cockroach/pull/86869 -[#86871]: https://github.com/cockroachdb/cockroach/pull/86871 -[#86872]: https://github.com/cockroachdb/cockroach/pull/86872 -[#86881]: https://github.com/cockroachdb/cockroach/pull/86881 -[#86885]: https://github.com/cockroachdb/cockroach/pull/86885 -[#86887]: https://github.com/cockroachdb/cockroach/pull/86887 -[#86916]: https://github.com/cockroachdb/cockroach/pull/86916 -[#86975]: https://github.com/cockroachdb/cockroach/pull/86975 -[#86977]: https://github.com/cockroachdb/cockroach/pull/86977 -[#87059]: https://github.com/cockroachdb/cockroach/pull/87059 -[#87077]: https://github.com/cockroachdb/cockroach/pull/87077 -[#87100]: https://github.com/cockroachdb/cockroach/pull/87100 -[#87118]: https://github.com/cockroachdb/cockroach/pull/87118 -[#87126]: https://github.com/cockroachdb/cockroach/pull/87126 -[#87127]: https://github.com/cockroachdb/cockroach/pull/87127 -[#87161]: https://github.com/cockroachdb/cockroach/pull/87161 -[#87182]: https://github.com/cockroachdb/cockroach/pull/87182 -[#87209]: https://github.com/cockroachdb/cockroach/pull/87209 -[#87211]: https://github.com/cockroachdb/cockroach/pull/87211 -[#87253]: https://github.com/cockroachdb/cockroach/pull/87253 -[#87269]: https://github.com/cockroachdb/cockroach/pull/87269 -[#87272]: https://github.com/cockroachdb/cockroach/pull/87272 -[#87466]: https://github.com/cockroachdb/cockroach/pull/87466 -[#87548]: https://github.com/cockroachdb/cockroach/pull/87548 -[#87618]: https://github.com/cockroachdb/cockroach/pull/87618 -[#87624]: https://github.com/cockroachdb/cockroach/pull/87624 -[#87633]: https://github.com/cockroachdb/cockroach/pull/87633 -[#87764]: https://github.com/cockroachdb/cockroach/pull/87764 -[#87848]: https://github.com/cockroachdb/cockroach/pull/87848 diff --git a/src/current/_includes/releases/v22.1/v22.1.8.md b/src/current/_includes/releases/v22.1/v22.1.8.md deleted file mode 100644 index 5a661ab5d29..00000000000 --- a/src/current/_includes/releases/v22.1/v22.1.8.md +++ /dev/null @@ -1,73 +0,0 @@ -## v22.1.8 - -Release Date: September 29, 2022 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

SQL language changes

- -- For pgwire-level prepared statements, CockroachDB now supports the case where the number of type hints is greater than the number of placeholders in a given query. [#88145][#88145] -- Placeholder indexes are now normalized to `$1` when computing statement fingerprints, to limit the number of distinct fingerprints created. [#88364][#88364] -- Changed the default value of `sql.metrics.statement_details.plan_collection.enabled` to `false`, as this information is no longer used. [#88420][#88420] - -
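For reference, the plan-collection setting mentioned above can be inspected or changed from any SQL session. This is a minimal sketch using standard cluster-setting commands, not text from the release itself:

~~~ sql
-- Check the current value (the default is now false).
SHOW CLUSTER SETTING sql.metrics.statement_details.plan_collection.enabled;

-- Re-enable plan collection if you still depend on it.
SET CLUSTER SETTING sql.metrics.statement_details.plan_collection.enabled = true;
~~~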

Operational changes

- -- Reduced the length of the `raft.process.handleready.latency` metric help text to avoid it being rejected by certain Prometheus services. [#88147][#88147] - -

DB Console changes

- -- Changed the column name from `Users` to `User Name` on the **Databases** > **Tables** page, when viewing Grants. [#87857][#87857] -- Fixed the index and grant sorting on the **Databases** page to have a default column, and to update the URL to match the selected item. [#87862][#87862] -- Added "Application Name" to the **SQL Activity** > **Statements**, **Transaction Overview** (and their respective column selectors), and **Transaction Details** pages, and updated the label from "App" to "Application Name" on the **Statement Details** page. [#87874][#87874] -- On the **SQL Activity** "Session Details" page, the "Most Recent Statement" column now shows the last active query instead of "No Active Statement". [#88055][#88055] - -

Bug fixes

- -- Previously, an active replication report update could prevent a node from shutting down until it completed. Now, the report update is cancelled on node shutdown instead. [#87924][#87924] -- Fixed a bug with [`LOOKUP`](https://www.cockroachlabs.com/docs/v22.2/joins#lookup-joins) join selectivity estimation when using [hash-sharded indexes](https://www.cockroachlabs.com/docs/v22.2/hash-sharded-indexes), which could cause `LOOKUP` joins to be selected by the optimizer in cases where other join methods are less expensive. [#87390][#87390] -- Fixed incorrect results from queries which utilize [locality](https://www.cockroachlabs.com/docs/v22.2/cockroach-start#locality)-optimized search on the inverted index of a table with `REGIONAL BY ROW` partitioning. [#88113][#88113] -- The `current_setting` [built-in function](https://www.cockroachlabs.com/docs/v22.2/functions-and-operators) no longer results in an error when checking a custom [session setting](https://www.cockroachlabs.com/docs/v22.2/set-vars) that does not exist when the `missing_ok` argument is set to `true`. [#88161][#88161] -- When a CockroachDB node is being [drained](https://www.cockroachlabs.com/docs/v22.2/node-shutdown#drain-a-node-manually), all queries that are still running on that node are now forcefully canceled after waiting for the specified `server.shutdown.query_wait` period if the newly-added cluster setting `sql.distsql.drain.cancel_after_wait.enabled` is set to `true` (it is `false` by default). [#88150][#88150] -- Previously, CockroachDB could incorrectly fail to fetch rows with `NULL` values when reading from the unique secondary index when multiple [column families](https://www.cockroachlabs.com/docs/v22.2/column-families) are defined for the table and the index doesn't store some of the `NOT NULL` columns. [#88209][#88209] -- CockroachDB now more promptly reacts to query cancellations (e.g., due to statement timeout being exceeded) after the query [spills to disk](https://www.cockroachlabs.com/docs/v22.2/vectorized-execution#disk-spilling-operations). [#88394][#88394] -- Fixed a bug existing since before v21.1 that could cause an internal error when executing a query with `LIMIT` ordering on the output of a [window function](https://www.cockroachlabs.com/docs/v22.2/window-functions). [#87746][#87746] -- CockroachDB no longer fetches unnecessary rows for queries with specified `LIMIT`s. The bug was introduced in v22.1.7. [#88421][#88421] -- Prometheus histograms were incorrectly omitting buckets whose cumulative count matched the preceding bucket. This would lead to erroneous results when operating on histogram sums. [#88331][#88331] -- Completed [statement diagnostics bundles](https://www.cockroachlabs.com/docs/v22.2/explain-analyze#debug-option) now persist in the DB Console, and can been seen on the **Statement Diagnostics History** page, under **Advanced Debug**. [#88390][#88390] -- Dropping temporary tables and sequences now properly checks a user's privileges. [#88360][#88360] -- The pgwire `DESCRIBE` step no longer fails with an error while attempting to look up cursors declared with names containing special characters. [#88413][#88413] -- Fixed a bug in [`BACKUP`](https://www.cockroachlabs.com/docs/v22.2/backup) where spans for views were being backed up. Because ranges are not split at view boundaries, this can cause the backup to send export requests to ranges that do not belong to any backup target. 
[#86681][#86681] -- Fixed a bug where if telemetry is enabled, [`COPY`](https://www.cockroachlabs.com/docs/v22.2/copy-from) could sometimes cause the server to crash. [#88325][#88325] -- Fixed a rare internal error that could occur during planning when a predicate included values close to the maximum or minimum `int64` value. The error, `estimated row count must be non-zero`, is now fixed. [#88533][#88533] -- Adjusted sending and receiving Raft queue sizes to match. Previously the receiver could unnecessarily drop messages in situations when the sending queue is bigger than the receiving one. [#88448][#88448] - -

Contributors

- -This release includes 40 merged PRs by 23 authors. - -[#86681]: https://github.com/cockroachdb/cockroach/pull/86681 -[#87390]: https://github.com/cockroachdb/cockroach/pull/87390 -[#87746]: https://github.com/cockroachdb/cockroach/pull/87746 -[#87857]: https://github.com/cockroachdb/cockroach/pull/87857 -[#87862]: https://github.com/cockroachdb/cockroach/pull/87862 -[#87874]: https://github.com/cockroachdb/cockroach/pull/87874 -[#87924]: https://github.com/cockroachdb/cockroach/pull/87924 -[#87935]: https://github.com/cockroachdb/cockroach/pull/87935 -[#88055]: https://github.com/cockroachdb/cockroach/pull/88055 -[#88113]: https://github.com/cockroachdb/cockroach/pull/88113 -[#88145]: https://github.com/cockroachdb/cockroach/pull/88145 -[#88147]: https://github.com/cockroachdb/cockroach/pull/88147 -[#88150]: https://github.com/cockroachdb/cockroach/pull/88150 -[#88161]: https://github.com/cockroachdb/cockroach/pull/88161 -[#88209]: https://github.com/cockroachdb/cockroach/pull/88209 -[#88325]: https://github.com/cockroachdb/cockroach/pull/88325 -[#88331]: https://github.com/cockroachdb/cockroach/pull/88331 -[#88360]: https://github.com/cockroachdb/cockroach/pull/88360 -[#88364]: https://github.com/cockroachdb/cockroach/pull/88364 -[#88390]: https://github.com/cockroachdb/cockroach/pull/88390 -[#88394]: https://github.com/cockroachdb/cockroach/pull/88394 -[#88413]: https://github.com/cockroachdb/cockroach/pull/88413 -[#88420]: https://github.com/cockroachdb/cockroach/pull/88420 -[#88421]: https://github.com/cockroachdb/cockroach/pull/88421 -[#88448]: https://github.com/cockroachdb/cockroach/pull/88448 -[#88533]: https://github.com/cockroachdb/cockroach/pull/88533 diff --git a/src/current/_includes/releases/v22.1/v22.1.9.md b/src/current/_includes/releases/v22.1/v22.1.9.md deleted file mode 100644 index c93e393a118..00000000000 --- a/src/current/_includes/releases/v22.1/v22.1.9.md +++ /dev/null @@ -1,99 +0,0 @@ -## v22.1.9 - -Release Date: October 17, 2022 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

Security updates

- -- The following types of data are now considered "safe" for reporting from within `debug.zip`: - - - Range start/end keys, which can include data from any indexed SQL column. - - Key spans, which can include data from any indexed SQL column. - - Usernames and role names. - - SQL object names (including names of databases, schemas, tables, sequences, views, types, and UDFs). - - [#88739][#88739] - -

SQL language changes

- -- The new cluster setting `sql.metrics.statement_details.gateway_node.enabled` controls whether the gateway node ID is persisted to the `system.statement_statistics` table as-is or as a `0` to decrease cardinality on the table. The node ID is still available on the statistics column. [#88634][#88634] - -
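A minimal sketch of how this setting might be used follows; it is an ordinary cluster-setting command and the chosen value is illustrative:

~~~ sql
-- Persist the gateway node ID as 0 to reduce cardinality in
-- system.statement_statistics; the ID remains available in the statistics column.
SET CLUSTER SETTING sql.metrics.statement_details.gateway_node.enabled = false;
~~~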

Operational changes

- -- The new cluster setting `kv.mvcc_gc.queue_interval` controls how long the MVCC garbage collection queue waits between processing replicas. It defaults to `1s`, which was the previously fixed value. This interval can be increased if a large volume of MVCC garbage collection work is disrupting foreground traffic. [#89430][#89430] - -
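As an illustration, the interval can be inspected and raised with standard cluster-setting commands; the `5s` value below is only an example, not a recommendation:

~~~ sql
-- Show the current interval (defaults to 1s).
SHOW CLUSTER SETTING kv.mvcc_gc.queue_interval;

-- Make the MVCC GC queue wait longer between processing replicas.
SET CLUSTER SETTING kv.mvcc_gc.queue_interval = '5s';
~~~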

Command-line changes

- -- The new `--redact` flag of the `debug zip` command triggers redaction of all sensitive data in debug zip bundles, except for range keys. The `--redact-logs` flag will be deprecated in v22.2. [#88739][#88739] - -

Bug fixes

- -- Fixed a bug introduced in v22.1.7 that could cause an internal panic when a query ordering contained redundant ordering columns. [#88480][#88480] -- Fixed a bug that could cause nodes to crash when executing apply-joins in query plans. [#88513][#88513] -- Fixed a bug introduced in v21.2.0 that could cause errors when executing queries with correlated `WITH` expressions. [#88513][#88513] -- Fixed a longstanding bug that could cause the optimizer to produce an incorrect plan when aggregate functions `st_makeline` or `st_extent` were called with invalid-type and empty inputs respectively. [#88952][#88952] -- Fixed unintended recordings of index reads caused by internal executor/queries. [#88943][#88943] -- Fixed a bug with capturing index usage statistics for database names with hyphens [#88999][#88999] -- Fixed a bug that caused incorrect evaluation of expressions in the form `col +/- const1 ? const2`, where `const1` and `const2` are constant values and `?` is any comparison operator. The bug was caused by operator overflow when the optimizer attempted to simplify these expressions to have a single constant value. [#88970][#88970] -- Fixed a bug where the `system.replication_constraint_stats` table did not show erroneous voter constraint violations when `num_voters` was configured. [#88662][#88662] -- Fixed a bug that caused incorrect results from the floor division operator, `//`, when the numerator was non-constant and the denominator was the constant 1. [#89263][#89263] -- Fixed a bug introduced in v2.1.0 that could cause queries containing a subquery with an `EXCEPT` clause to produce incorrect results. This could happen if the optimizer could guarantee that the left side of the `EXCEPT` clause always returned more rows than the right side. In this case, the optimizer made an incorrect assumption that the `EXCEPT` subquery always returned at least one row, which could cause the optimizer to perform an invalid transformation, leading to the potential for incorrect results in the full query result. [#89134][#89134] -- Fixed a bug that prevented saving of a statement bundle that was collected for a query that resulted in a `statement_timeout` error. [#89126][#89126] -- Fixed a longstanding bug that could cause a panic when running a query with an `EXPLAIN` clause that attempts to order on a non-output column. [#88686][#88686] -- Fixed a bug introduced in v22.1.0 that could cause incorrect results in a narrow circumstance:
  1. A query with ORDER BY and LIMIT is executed.
  2. The table that contains the ORDER BY columns has an index that contains those columns.
  3. The index contains a prefix of columns held to a fixed number of values by the query filter, such as WHERE a IN (1, 3).
  4. A CHECK constraint (such as CHECK (a IN (1, 3))) is inferred by either:
    • A computed column expression (such as WHERE a IN (1, 3) and a column b INT AS (a + 10) STORED).
    • A PARTITION BY clause (such as INDEX (a, ...) PARTITION BY LIST (a) (PARTITION p VALUES ((1), (3)))).
[#89281][#89281] (an illustrative schema and query are sketched after this list) -- The WAL is now flushed when writing storage checkpoints on consistency checker failures. [#89402][#89402] -- Fixed a bug that could cause a restore operation to fail with a spurious error. [#89443][#89443] -- Fixed a bug that caused changefeeds to be permanently in a "failed to send RPC" state. [#87804][#87804] -- Improved optimizer selectivity and cost estimates of zigzag joins to prevent query plans from using them when many rows are qualified. [#89460][#89460] -- A `VOTER_DEMOTING_LEARNER` can acquire the lease in a joint configuration only when there is a `VOTER_INCOMING` in the configuration and the `VOTER_DEMOTING_LEARNER` was the last leaseholder. This prevents a situation in which the system is unable to exit the joint configuration. [#89611][#89611] -- Fixed a bug introduced in v22.1.0 that could cause CockroachDB to crash when dropping a role that owned two schemas with the same name in different databases. [#89538][#89538] -- Fixed a bug in `pg_catalog` tables which could result in an internal error if a schema is concurrently dropped. [#88600][#88600] -- Refined a check conducted during restore that ensures that all previously-offline tables were properly introduced. [#89688][#89688] -- Fixed a bug in v22.1.0 to v22.1.8 that could cause a query with ORDER BY and LIMIT clauses to return incorrect results if it scanned a multi-column index containing the `ORDER BY` columns, and a prefix of the index columns was held fixed to two or more constant values by the query filter or schema. [#88488][#88488] - -
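To make the conditions in the list above concrete, here is a hypothetical schema and query of the described shape. The table, column, and index names are invented for illustration, and this is not a verified reproduction of the bug:

~~~ sql
-- A computed column whose expression references `a`, and an index whose
-- prefix column `a` is held to two fixed values by the query filter.
CREATE TABLE t (
  a INT,
  b INT AS (a + 10) STORED,
  c INT,
  INDEX (a, c)
);

-- ORDER BY and LIMIT on a column stored in that index; on affected versions,
-- a query of this shape could return incorrect results.
SELECT a, c FROM t WHERE a IN (1, 3) ORDER BY c LIMIT 10;
~~~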

Performance improvements

- -- HTTP requests with `Accept-encoding: gzip` previously resulted in valid gzip-encoded but uncompressed responses. This resulted in inefficient HTTP transfer times. Those responses are now properly compressed, resulting in smaller network responses. [#89513][#89513] - -

Miscellaneous

- -- The SQL proxy now validates the tenant certificate's common name and organization, in addition to its DNS name. The DNS name for a Kubernetes pod is the pod's IP address, and IP addresses are reused by the cluster. [#89677][#89677] - -- Reverted a fix for a bug that caused histograms to incorrectly omit buckets whose cumulative count matched the preceding bucket. The fix led to a significant increase in memory usage on clusters with Prometheus or OpenTelemetry collector instances. [#89532][#89532] - -

Contributors

- -This release includes 58 merged PRs by 37 authors. - -[#87804]: https://github.com/cockroachdb/cockroach/pull/87804 -[#88480]: https://github.com/cockroachdb/cockroach/pull/88480 -[#88488]: https://github.com/cockroachdb/cockroach/pull/88488 -[#88513]: https://github.com/cockroachdb/cockroach/pull/88513 -[#88600]: https://github.com/cockroachdb/cockroach/pull/88600 -[#88634]: https://github.com/cockroachdb/cockroach/pull/88634 -[#88662]: https://github.com/cockroachdb/cockroach/pull/88662 -[#88686]: https://github.com/cockroachdb/cockroach/pull/88686 -[#88739]: https://github.com/cockroachdb/cockroach/pull/88739 -[#88759]: https://github.com/cockroachdb/cockroach/pull/88759 -[#88943]: https://github.com/cockroachdb/cockroach/pull/88943 -[#88952]: https://github.com/cockroachdb/cockroach/pull/88952 -[#88970]: https://github.com/cockroachdb/cockroach/pull/88970 -[#88999]: https://github.com/cockroachdb/cockroach/pull/88999 -[#89126]: https://github.com/cockroachdb/cockroach/pull/89126 -[#89134]: https://github.com/cockroachdb/cockroach/pull/89134 -[#89194]: https://github.com/cockroachdb/cockroach/pull/89194 -[#89263]: https://github.com/cockroachdb/cockroach/pull/89263 -[#89281]: https://github.com/cockroachdb/cockroach/pull/89281 -[#89402]: https://github.com/cockroachdb/cockroach/pull/89402 -[#89430]: https://github.com/cockroachdb/cockroach/pull/89430 -[#89443]: https://github.com/cockroachdb/cockroach/pull/89443 -[#89460]: https://github.com/cockroachdb/cockroach/pull/89460 -[#89513]: https://github.com/cockroachdb/cockroach/pull/89513 -[#89532]: https://github.com/cockroachdb/cockroach/pull/89532 -[#89538]: https://github.com/cockroachdb/cockroach/pull/89538 -[#89596]: https://github.com/cockroachdb/cockroach/pull/89596 -[#89611]: https://github.com/cockroachdb/cockroach/pull/89611 -[#89677]: https://github.com/cockroachdb/cockroach/pull/89677 -[#89688]: https://github.com/cockroachdb/cockroach/pull/89688 -[#89019]: https://github.com/cockroachdb/cockroach/pull/89019 diff --git a/src/current/_includes/releases/v22.2/v22.2.16.md b/src/current/_includes/releases/v22.2/v22.2.16.md index c0322a6075d..6e338399cef 100644 --- a/src/current/_includes/releases/v22.2/v22.2.16.md +++ b/src/current/_includes/releases/v22.2/v22.2.16.md @@ -6,7 +6,7 @@ Release Date: November 6, 2023

Bug fixes

-- Fixed a rare internal error in the [optimizer](../v22.1/cost-based-optimizer.html), which could occur while enforcing orderings between SQL operators. This error has existed since before v22.1. [#113640][#113640] +- Fixed a rare internal error in the optimizer, which could occur while enforcing orderings between SQL operators. This error has existed since before v22.1. [#113640][#113640] - Fixed a bug where CockroachDB could incorrectly evaluate [lookup](../v23.2/joins.html#lookup-joins) and index [joins](../v23.2/joins.html) into tables with at least three [column families](../v23.2/column-families.html). This would result in either the `non-nullable column with no value` internal error, or the query would return incorrect results. This bug was introduced in v22.2. [#113694][#113694]

Contributors

diff --git a/src/current/_includes/sidebar-data-v22.1.json b/src/current/_includes/sidebar-data-v22.1.json deleted file mode 100644 index ebf11cbfaae..00000000000 --- a/src/current/_includes/sidebar-data-v22.1.json +++ /dev/null @@ -1,18 +0,0 @@ -[ - { - "title": "Docs Home", - "is_top_level": true, - "urls": [ - "/" - ] - }, - {% include_cached v22.1/sidebar-data/get-started.json %}, - {% include_cached v22.1/sidebar-data/develop.json %}, - {% include_cached v22.1/sidebar-data/deploy.json %}, - {% include_cached v22.1/sidebar-data/manage.json %}, - {% include_cached v22.1/sidebar-data/migrate.json %}, - {% include_cached v22.1/sidebar-data/stream.json %}, - {% include_cached v22.1/sidebar-data/reference.json %}, - {% include_cached v22.1/sidebar-data/releases.json %}, - {% include_cached sidebar-data-cockroach-university.json %} -] diff --git a/src/current/_includes/v22.1/app/before-you-begin.md b/src/current/_includes/v22.1/app/before-you-begin.md deleted file mode 100644 index b271d6ff85c..00000000000 --- a/src/current/_includes/v22.1/app/before-you-begin.md +++ /dev/null @@ -1,12 +0,0 @@ -1. [Install CockroachDB](install-cockroachdb.html). -2. Start up a [secure](secure-a-cluster.html) or [insecure](start-a-local-cluster.html) local cluster. -3. Choose the instructions that correspond to whether your cluster is secure or insecure: - -
- - -
- -
-{% include {{ page.version.version }}/prod-deployment/insecure-flag.md %} -
\ No newline at end of file diff --git a/src/current/_includes/v22.1/app/cc-free-tier-params.md b/src/current/_includes/v22.1/app/cc-free-tier-params.md deleted file mode 100644 index f8a196cdd8e..00000000000 --- a/src/current/_includes/v22.1/app/cc-free-tier-params.md +++ /dev/null @@ -1,10 +0,0 @@ -Where: - -- `{username}` and `{password}` specify the SQL username and password that you created earlier. -- `{globalhost}` is the name of the CockroachDB {{ site.data.products.cloud }} free tier host (e.g., `free-tier.gcp-us-central1.cockroachlabs.cloud`). -- `{path to the CA certificate}` is the path to the `cc-ca.crt` file that you downloaded from the CockroachDB {{ site.data.products.cloud }} Console. -- `{cluster_name}` is the name of your cluster. - -{{site.data.alerts.callout_info}} -If you are using the connection string that you [copied from the **Connection info** modal](#set-up-your-cluster-connection), your username, password, hostname, and cluster name will be pre-populated. -{{site.data.alerts.end}} \ No newline at end of file diff --git a/src/current/_includes/v22.1/app/create-a-database.md b/src/current/_includes/v22.1/app/create-a-database.md deleted file mode 100644 index 468eb93a57f..00000000000 --- a/src/current/_includes/v22.1/app/create-a-database.md +++ /dev/null @@ -1,54 +0,0 @@ -
- -1. In the SQL shell, create the `bank` database that your application will use: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE DATABASE bank; - ~~~ - -1. Create a SQL user for your app: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE USER WITH PASSWORD ; - ~~~ - - Take note of the username and password. You will use it in your application code later. - -1. Give the user the necessary permissions: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > GRANT ALL ON DATABASE bank TO ; - ~~~ - -
- -
- -1. If you haven't already, [download the CockroachDB binary](install-cockroachdb.html). -1. Start the [built-in SQL shell](cockroach-sql.html) using the connection string you got from the CockroachDB {{ site.data.products.cloud }} Console: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql \ - --url='' - ~~~ - -1. In the SQL shell, create the `bank` database that your application will use: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE DATABASE bank; - ~~~ - -1. Exit the SQL shell: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > \q - ~~~ - - -
\ No newline at end of file diff --git a/src/current/_includes/v22.1/app/create-maxroach-user-and-bank-database.md b/src/current/_includes/v22.1/app/create-maxroach-user-and-bank-database.md deleted file mode 100644 index 1e259b96012..00000000000 --- a/src/current/_includes/v22.1/app/create-maxroach-user-and-bank-database.md +++ /dev/null @@ -1,32 +0,0 @@ -Start the [built-in SQL shell](cockroach-sql.html): - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach sql --certs-dir=certs -~~~ - -In the SQL shell, issue the following statements to create the `maxroach` user and `bank` database: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE USER IF NOT EXISTS maxroach; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE DATABASE bank; -~~~ - -Give the `maxroach` user the necessary permissions: - -{% include_cached copy-clipboard.html %} -~~~ sql -> GRANT ALL ON DATABASE bank TO maxroach; -~~~ - -Exit the SQL shell: - -{% include_cached copy-clipboard.html %} -~~~ sql -> \q -~~~ diff --git a/src/current/_includes/v22.1/app/for-a-complete-example-go.md b/src/current/_includes/v22.1/app/for-a-complete-example-go.md deleted file mode 100644 index 64803f686a9..00000000000 --- a/src/current/_includes/v22.1/app/for-a-complete-example-go.md +++ /dev/null @@ -1,4 +0,0 @@ -For complete examples, see: - -- [Build a Go App with CockroachDB](build-a-go-app-with-cockroachdb.html) (pgx) -- [Build a Go App with CockroachDB and GORM](build-a-go-app-with-cockroachdb.html) diff --git a/src/current/_includes/v22.1/app/for-a-complete-example-java.md b/src/current/_includes/v22.1/app/for-a-complete-example-java.md deleted file mode 100644 index b4c63135ae0..00000000000 --- a/src/current/_includes/v22.1/app/for-a-complete-example-java.md +++ /dev/null @@ -1,4 +0,0 @@ -For complete examples, see: - -- [Build a Java App with CockroachDB](build-a-java-app-with-cockroachdb.html) (JDBC) -- [Build a Java App with CockroachDB and Hibernate](build-a-java-app-with-cockroachdb-hibernate.html) diff --git a/src/current/_includes/v22.1/app/for-a-complete-example-python.md b/src/current/_includes/v22.1/app/for-a-complete-example-python.md deleted file mode 100644 index 5b5d4bec3e9..00000000000 --- a/src/current/_includes/v22.1/app/for-a-complete-example-python.md +++ /dev/null @@ -1,6 +0,0 @@ -For complete examples, see: - -- [Build a Python App with CockroachDB](build-a-python-app-with-cockroachdb-psycopg3.html) (psycopg3) -- [Build a Python App with CockroachDB](build-a-python-app-with-cockroachdb.html) (psycopg2) -- [Build a Python App with CockroachDB and SQLAlchemy](build-a-python-app-with-cockroachdb-sqlalchemy.html) -- [Build a Python App with CockroachDB and Django](build-a-python-app-with-cockroachdb-django.html) diff --git a/src/current/_includes/v22.1/app/hibernate-dialects-note.md b/src/current/_includes/v22.1/app/hibernate-dialects-note.md deleted file mode 100644 index 85f217abd3c..00000000000 --- a/src/current/_includes/v22.1/app/hibernate-dialects-note.md +++ /dev/null @@ -1,5 +0,0 @@ -Versions of the Hibernate CockroachDB dialect correspond to the version of CockroachDB installed on your machine. For example, `org.hibernate.dialect.CockroachDB201Dialect` corresponds to CockroachDB v20.1 and later, and `org.hibernate.dialect.CockroachDB192Dialect` corresponds to CockroachDB v19.2 and later. 
- -All dialect versions are forward-compatible (e.g., CockroachDB v20.1 is compatible with `CockroachDB192Dialect`), as long as your application is not affected by any backward-incompatible changes listed in your CockroachDB version's [release notes](../releases/index.html). In the event of a CockroachDB version upgrade, using a previous version of the CockroachDB dialect will not break an application, but, to enable all features available in your version of CockroachDB, we recommend keeping the dialect version in sync with the installed version of CockroachDB. - -Not all versions of CockroachDB have a corresponding dialect yet. Use the dialect number that is closest to your installed version of CockroachDB. For example, use `CockroachDB201Dialect` when using CockroachDB v21.1 and later. \ No newline at end of file diff --git a/src/current/_includes/v22.1/app/insecure/create-maxroach-user-and-bank-database.md b/src/current/_includes/v22.1/app/insecure/create-maxroach-user-and-bank-database.md deleted file mode 100644 index 0fff36e7545..00000000000 --- a/src/current/_includes/v22.1/app/insecure/create-maxroach-user-and-bank-database.md +++ /dev/null @@ -1,32 +0,0 @@ -Start the [built-in SQL shell](cockroach-sql.html): - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure -~~~ - -In the SQL shell, issue the following statements to create the `maxroach` user and `bank` database: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE USER IF NOT EXISTS maxroach; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE DATABASE bank; -~~~ - -Give the `maxroach` user the necessary permissions: - -{% include_cached copy-clipboard.html %} -~~~ sql -> GRANT ALL ON DATABASE bank TO maxroach; -~~~ - -Exit the SQL shell: - -{% include_cached copy-clipboard.html %} -~~~ sql -> \q -~~~ diff --git a/src/current/_includes/v22.1/app/insecure/jooq-basic-sample/Sample.java b/src/current/_includes/v22.1/app/insecure/jooq-basic-sample/Sample.java deleted file mode 100644 index d1a54a8ddd2..00000000000 --- a/src/current/_includes/v22.1/app/insecure/jooq-basic-sample/Sample.java +++ /dev/null @@ -1,215 +0,0 @@ -package com.cockroachlabs; - -import com.cockroachlabs.example.jooq.db.Tables; -import com.cockroachlabs.example.jooq.db.tables.records.AccountsRecord; -import org.jooq.DSLContext; -import org.jooq.SQLDialect; -import org.jooq.Source; -import org.jooq.conf.RenderQuotedNames; -import org.jooq.conf.Settings; -import org.jooq.exception.DataAccessException; -import org.jooq.impl.DSL; - -import java.io.InputStream; -import java.sql.Connection; -import java.sql.DriverManager; -import java.sql.SQLException; -import java.util.*; -import java.util.concurrent.atomic.AtomicInteger; -import java.util.concurrent.atomic.AtomicLong; -import java.util.function.Function; - -import static com.cockroachlabs.example.jooq.db.Tables.ACCOUNTS; - -public class Sample { - - private static final Random RAND = new Random(); - private static final boolean FORCE_RETRY = false; - private static final String RETRY_SQL_STATE = "40001"; - private static final int MAX_ATTEMPT_COUNT = 6; - - private static Function addAccounts() { - return ctx -> { - long rv = 0; - - ctx.delete(ACCOUNTS).execute(); - ctx.batchInsert( - new AccountsRecord(1L, 1000L), - new AccountsRecord(2L, 250L), - new AccountsRecord(3L, 314159L) - ).execute(); - - rv = 1; - System.out.printf("APP: addAccounts() --> %d\n", rv); - return rv; - }; - } - - private static Function transferFunds(long fromId, long toId, long 
amount) { - return ctx -> { - long rv = 0; - - AccountsRecord fromAccount = ctx.fetchSingle(ACCOUNTS, ACCOUNTS.ID.eq(fromId)); - AccountsRecord toAccount = ctx.fetchSingle(ACCOUNTS, ACCOUNTS.ID.eq(toId)); - - if (!(amount > fromAccount.getBalance())) { - fromAccount.setBalance(fromAccount.getBalance() - amount); - toAccount.setBalance(toAccount.getBalance() + amount); - - ctx.batchUpdate(fromAccount, toAccount).execute(); - rv = amount; - System.out.printf("APP: transferFunds(%d, %d, %d) --> %d\n", fromId, toId, amount, rv); - } - - return rv; - }; - } - - // Test our retry handling logic if FORCE_RETRY is true. This - // method is only used to test the retry logic. It is not - // intended for production code. - private static Function forceRetryLogic() { - return ctx -> { - long rv = -1; - try { - System.out.printf("APP: testRetryLogic: BEFORE EXCEPTION\n"); - ctx.execute("SELECT crdb_internal.force_retry('1s')"); - } catch (DataAccessException e) { - System.out.printf("APP: testRetryLogic: AFTER EXCEPTION\n"); - throw e; - } - return rv; - }; - } - - private static Function getAccountBalance(long id) { - return ctx -> { - AccountsRecord account = ctx.fetchSingle(ACCOUNTS, ACCOUNTS.ID.eq(id)); - long balance = account.getBalance(); - System.out.printf("APP: getAccountBalance(%d) --> %d\n", id, balance); - return balance; - }; - } - - // Run SQL code in a way that automatically handles the - // transaction retry logic so we do not have to duplicate it in - // various places. - private static long runTransaction(DSLContext session, Function fn) { - AtomicLong rv = new AtomicLong(0L); - AtomicInteger attemptCount = new AtomicInteger(0); - - while (attemptCount.get() < MAX_ATTEMPT_COUNT) { - attemptCount.incrementAndGet(); - - if (attemptCount.get() > 1) { - System.out.printf("APP: Entering retry loop again, iteration %d\n", attemptCount.get()); - } - - if (session.connectionResult(connection -> { - connection.setAutoCommit(false); - System.out.printf("APP: BEGIN;\n"); - - if (attemptCount.get() == MAX_ATTEMPT_COUNT) { - String err = String.format("hit max of %s attempts, aborting", MAX_ATTEMPT_COUNT); - throw new RuntimeException(err); - } - - // This block is only used to test the retry logic. - // It is not necessary in production code. See also - // the method 'testRetryLogic()'. - if (FORCE_RETRY) { - session.fetch("SELECT now()"); - } - - try { - rv.set(fn.apply(session)); - if (rv.get() != -1) { - connection.commit(); - System.out.printf("APP: COMMIT;\n"); - return true; - } - } catch (DataAccessException | SQLException e) { - String sqlState = e instanceof SQLException ? ((SQLException) e).getSQLState() : ((DataAccessException) e).sqlState(); - - if (RETRY_SQL_STATE.equals(sqlState)) { - // Since this is a transaction retry error, we - // roll back the transaction and sleep a little - // before trying again. Each time through the - // loop we sleep for a little longer than the last - // time (A.K.A. exponential backoff). 
- System.out.printf("APP: retryable exception occurred:\n sql state = [%s]\n message = [%s]\n retry counter = %s\n", sqlState, e.getMessage(), attemptCount.get()); - System.out.printf("APP: ROLLBACK;\n"); - connection.rollback(); - int sleepMillis = (int)(Math.pow(2, attemptCount.get()) * 100) + RAND.nextInt(100); - System.out.printf("APP: Hit 40001 transaction retry error, sleeping %s milliseconds\n", sleepMillis); - try { - Thread.sleep(sleepMillis); - } catch (InterruptedException ignored) { - // no-op - } - rv.set(-1L); - } else { - throw e; - } - } - - return false; - })) { - break; - } - } - - return rv.get(); - } - - public static void main(String[] args) throws Exception { - try (Connection connection = DriverManager.getConnection( - "jdbc:postgresql://localhost:26257/bank?sslmode=disable", - "maxroach", - "" - )) { - DSLContext ctx = DSL.using(connection, SQLDialect.COCKROACHDB, new Settings() - .withExecuteLogging(true) - .withRenderQuotedNames(RenderQuotedNames.NEVER)); - - // Initialise database with db.sql script - try (InputStream in = Sample.class.getResourceAsStream("/db.sql")) { - ctx.parser().parse(Source.of(in).readString()).executeBatch(); - } - - long fromAccountId = 1; - long toAccountId = 2; - long transferAmount = 100; - - if (FORCE_RETRY) { - System.out.printf("APP: About to test retry logic in 'runTransaction'\n"); - runTransaction(ctx, forceRetryLogic()); - } else { - - runTransaction(ctx, addAccounts()); - long fromBalance = runTransaction(ctx, getAccountBalance(fromAccountId)); - long toBalance = runTransaction(ctx, getAccountBalance(toAccountId)); - if (fromBalance != -1 && toBalance != -1) { - // Success! - System.out.printf("APP: getAccountBalance(%d) --> %d\n", fromAccountId, fromBalance); - System.out.printf("APP: getAccountBalance(%d) --> %d\n", toAccountId, toBalance); - } - - // Transfer $100 from account 1 to account 2 - long transferResult = runTransaction(ctx, transferFunds(fromAccountId, toAccountId, transferAmount)); - if (transferResult != -1) { - // Success! - System.out.printf("APP: transferFunds(%d, %d, %d) --> %d \n", fromAccountId, toAccountId, transferAmount, transferResult); - - long fromBalanceAfter = runTransaction(ctx, getAccountBalance(fromAccountId)); - long toBalanceAfter = runTransaction(ctx, getAccountBalance(toAccountId)); - if (fromBalanceAfter != -1 && toBalanceAfter != -1) { - // Success! - System.out.printf("APP: getAccountBalance(%d) --> %d\n", fromAccountId, fromBalanceAfter); - System.out.printf("APP: getAccountBalance(%d) --> %d\n", toAccountId, toBalanceAfter); - } - } - } - } - } -} diff --git a/src/current/_includes/v22.1/app/insecure/jooq-basic-sample/jooq-basic-sample.zip b/src/current/_includes/v22.1/app/insecure/jooq-basic-sample/jooq-basic-sample.zip deleted file mode 100644 index f11f86b8f43..00000000000 Binary files a/src/current/_includes/v22.1/app/insecure/jooq-basic-sample/jooq-basic-sample.zip and /dev/null differ diff --git a/src/current/_includes/v22.1/app/insecure/upperdb-basic-sample/main.go b/src/current/_includes/v22.1/app/insecure/upperdb-basic-sample/main.go deleted file mode 100644 index 5c855356d7e..00000000000 --- a/src/current/_includes/v22.1/app/insecure/upperdb-basic-sample/main.go +++ /dev/null @@ -1,185 +0,0 @@ -package main - -import ( - "fmt" - "log" - "time" - - "github.com/upper/db/v4" - "github.com/upper/db/v4/adapter/cockroachdb" -) - -// The settings variable stores connection details. 
-var settings = cockroachdb.ConnectionURL{ - Host: "localhost", - Database: "bank", - User: "maxroach", - Options: map[string]string{ - // Insecure node. - "sslmode": "disable", - }, -} - -// Accounts is a handy way to represent a collection. -func Accounts(sess db.Session) db.Store { - return sess.Collection("accounts") -} - -// Account is used to represent a single record in the "accounts" table. -type Account struct { - ID uint64 `db:"id,omitempty"` - Balance int64 `db:"balance"` -} - -// Collection is required in order to create a relation between the Account -// struct and the "accounts" table. -func (a *Account) Store(sess db.Session) db.Store { - return Accounts(sess) -} - -// createTables creates all the tables that are neccessary to run this example. -func createTables(sess db.Session) error { - _, err := sess.SQL().Exec(` - CREATE TABLE IF NOT EXISTS accounts ( - ID SERIAL PRIMARY KEY, - balance INT - ) - `) - if err != nil { - return err - } - return nil -} - -// crdbForceRetry can be used to simulate a transaction error and -// demonstrate upper/db's ability to retry the transaction automatically. -// -// By default, upper/db will retry the transaction five times, if you want -// to modify this number use: sess.SetMaxTransactionRetries(n). -// -// This is only used for demonstration purposes and not intended -// for production code. -func crdbForceRetry(sess db.Session) error { - var err error - - // The first statement in a transaction can be retried transparently on the - // server, so we need to add a placeholder statement so that our - // force_retry() statement isn't the first one. - _, err = sess.SQL().Exec(`SELECT 1`) - if err != nil { - return err - } - - // If force_retry is called during the specified interval from the beginning - // of the transaction it returns a retryable error. If not, 0 is returned - // instead of an error. - _, err = sess.SQL().Exec(`SELECT crdb_internal.force_retry('1s'::INTERVAL)`) - if err != nil { - return err - } - - return nil -} - -func main() { - // Connect to the local CockroachDB node. - sess, err := cockroachdb.Open(settings) - if err != nil { - log.Fatal("cockroachdb.Open: ", err) - } - defer sess.Close() - - // Adjust this number to fit your specific needs (set to 5, by default) - // sess.SetMaxTransactionRetries(10) - - // Create the "accounts" table - createTables(sess) - - // Delete all the previous items in the "accounts" table. - err = Accounts(sess).Truncate() - if err != nil { - log.Fatal("Truncate: ", err) - } - - // Create a new account with a balance of 1000. - account1 := Account{Balance: 1000} - err = Accounts(sess).InsertReturning(&account1) - if err != nil { - log.Fatal("sess.Save: ", err) - } - - // Create a new account with a balance of 250. - account2 := Account{Balance: 250} - err = Accounts(sess).InsertReturning(&account2) - if err != nil { - log.Fatal("sess.Save: ", err) - } - - // Printing records - printRecords(sess) - - // Change the balance of the first account. - account1.Balance = 500 - err = sess.Save(&account1) - if err != nil { - log.Fatal("sess.Save: ", err) - } - - // Change the balance of the second account. - account2.Balance = 999 - err = sess.Save(&account2) - if err != nil { - log.Fatal("sess.Save: ", err) - } - - // Printing records - printRecords(sess) - - // Delete the first record. - err = sess.Delete(&account1) - if err != nil { - log.Fatal("Delete: ", err) - } - - startTime := time.Now() - - // Add a couple of new records within a transaction. 
- err = sess.Tx(func(tx db.Session) error { - var err error - - if err = tx.Save(&Account{Balance: 887}); err != nil { - return err - } - - if time.Now().Sub(startTime) < time.Second*1 { - // Will fail continuously for 2 seconds. - if err = crdbForceRetry(tx); err != nil { - return err - } - } - - if err = tx.Save(&Account{Balance: 342}); err != nil { - return err - } - - return nil - }) - if err != nil { - log.Fatal("Could not commit transaction: ", err) - } - - // Printing records - printRecords(sess) -} - -func printRecords(sess db.Session) { - accounts := []Account{} - err := Accounts(sess).Find().All(&accounts) - if err != nil { - log.Fatal("Find: ", err) - } - log.Printf("Balances:") - for i := range accounts { - fmt.Printf("\taccounts[%d]: %d\n", accounts[i].ID, accounts[i].Balance) - } -} diff --git a/src/current/_includes/v22.1/app/java-tls-note.md b/src/current/_includes/v22.1/app/java-tls-note.md deleted file mode 100644 index a1fd6f61600..00000000000 --- a/src/current/_includes/v22.1/app/java-tls-note.md +++ /dev/null @@ -1,13 +0,0 @@ -{{site.data.alerts.callout_danger}} -CockroachDB supports TLS 1.2 and 1.3, and uses 1.3 by default. - -[A bug in the TLS 1.3 implementation](https://bugs.openjdk.java.net/browse/JDK-8236039) in Java 11 versions lower than 11.0.7 and Java 13 versions lower than 13.0.3 makes the versions incompatible with CockroachDB. - -If an incompatible version is used, the client may throw the following exception: - -`javax.net.ssl.SSLHandshakeException: extension (5) should not be presented in certificate_request` - -For applications running Java 11 or 13, make sure that you have version 11.0.7 or higher, or 13.0.3 or higher. - -If you cannot upgrade to a version higher than 11.0.7 or 13.0.3, you must configure the application to use TLS 1.2. For example, when starting your app, use: `$ java -Djdk.tls.client.protocols=TLSv1.2 appName` -{{site.data.alerts.end}} \ No newline at end of file diff --git a/src/current/_includes/v22.1/app/java-version-note.md b/src/current/_includes/v22.1/app/java-version-note.md deleted file mode 100644 index 3d559314262..00000000000 --- a/src/current/_includes/v22.1/app/java-version-note.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -We recommend using Java versions 8+ with CockroachDB. 
-{{site.data.alerts.end}} \ No newline at end of file diff --git a/src/current/_includes/v22.1/app/jooq-basic-sample/Sample.java b/src/current/_includes/v22.1/app/jooq-basic-sample/Sample.java deleted file mode 100644 index fd71726603e..00000000000 --- a/src/current/_includes/v22.1/app/jooq-basic-sample/Sample.java +++ /dev/null @@ -1,215 +0,0 @@ -package com.cockroachlabs; - -import com.cockroachlabs.example.jooq.db.Tables; -import com.cockroachlabs.example.jooq.db.tables.records.AccountsRecord; -import org.jooq.DSLContext; -import org.jooq.SQLDialect; -import org.jooq.Source; -import org.jooq.conf.RenderQuotedNames; -import org.jooq.conf.Settings; -import org.jooq.exception.DataAccessException; -import org.jooq.impl.DSL; - -import java.io.InputStream; -import java.sql.Connection; -import java.sql.DriverManager; -import java.sql.SQLException; -import java.util.*; -import java.util.concurrent.atomic.AtomicInteger; -import java.util.concurrent.atomic.AtomicLong; -import java.util.function.Function; - -import static com.cockroachlabs.example.jooq.db.Tables.ACCOUNTS; - -public class Sample { - - private static final Random RAND = new Random(); - private static final boolean FORCE_RETRY = false; - private static final String RETRY_SQL_STATE = "40001"; - private static final int MAX_ATTEMPT_COUNT = 6; - - private static Function addAccounts() { - return ctx -> { - long rv = 0; - - ctx.delete(ACCOUNTS).execute(); - ctx.batchInsert( - new AccountsRecord(1L, 1000L), - new AccountsRecord(2L, 250L), - new AccountsRecord(3L, 314159L) - ).execute(); - - rv = 1; - System.out.printf("APP: addAccounts() --> %d\n", rv); - return rv; - }; - } - - private static Function transferFunds(long fromId, long toId, long amount) { - return ctx -> { - long rv = 0; - - AccountsRecord fromAccount = ctx.fetchSingle(ACCOUNTS, ACCOUNTS.ID.eq(fromId)); - AccountsRecord toAccount = ctx.fetchSingle(ACCOUNTS, ACCOUNTS.ID.eq(toId)); - - if (!(amount > fromAccount.getBalance())) { - fromAccount.setBalance(fromAccount.getBalance() - amount); - toAccount.setBalance(toAccount.getBalance() + amount); - - ctx.batchUpdate(fromAccount, toAccount).execute(); - rv = amount; - System.out.printf("APP: transferFunds(%d, %d, %d) --> %d\n", fromId, toId, amount, rv); - } - - return rv; - }; - } - - // Test our retry handling logic if FORCE_RETRY is true. This - // method is only used to test the retry logic. It is not - // intended for production code. - private static Function forceRetryLogic() { - return ctx -> { - long rv = -1; - try { - System.out.printf("APP: testRetryLogic: BEFORE EXCEPTION\n"); - ctx.execute("SELECT crdb_internal.force_retry('1s')"); - } catch (DataAccessException e) { - System.out.printf("APP: testRetryLogic: AFTER EXCEPTION\n"); - throw e; - } - return rv; - }; - } - - private static Function getAccountBalance(long id) { - return ctx -> { - AccountsRecord account = ctx.fetchSingle(ACCOUNTS, ACCOUNTS.ID.eq(id)); - long balance = account.getBalance(); - System.out.printf("APP: getAccountBalance(%d) --> %d\n", id, balance); - return balance; - }; - } - - // Run SQL code in a way that automatically handles the - // transaction retry logic so we do not have to duplicate it in - // various places. 
- private static long runTransaction(DSLContext session, Function fn) { - AtomicLong rv = new AtomicLong(0L); - AtomicInteger attemptCount = new AtomicInteger(0); - - while (attemptCount.get() < MAX_ATTEMPT_COUNT) { - attemptCount.incrementAndGet(); - - if (attemptCount.get() > 1) { - System.out.printf("APP: Entering retry loop again, iteration %d\n", attemptCount.get()); - } - - if (session.connectionResult(connection -> { - connection.setAutoCommit(false); - System.out.printf("APP: BEGIN;\n"); - - if (attemptCount.get() == MAX_ATTEMPT_COUNT) { - String err = String.format("hit max of %s attempts, aborting", MAX_ATTEMPT_COUNT); - throw new RuntimeException(err); - } - - // This block is only used to test the retry logic. - // It is not necessary in production code. See also - // the method 'testRetryLogic()'. - if (FORCE_RETRY) { - session.fetch("SELECT now()"); - } - - try { - rv.set(fn.apply(session)); - if (rv.get() != -1) { - connection.commit(); - System.out.printf("APP: COMMIT;\n"); - return true; - } - } catch (DataAccessException | SQLException e) { - String sqlState = e instanceof SQLException ? ((SQLException) e).getSQLState() : ((DataAccessException) e).sqlState(); - - if (RETRY_SQL_STATE.equals(sqlState)) { - // Since this is a transaction retry error, we - // roll back the transaction and sleep a little - // before trying again. Each time through the - // loop we sleep for a little longer than the last - // time (A.K.A. exponential backoff). - System.out.printf("APP: retryable exception occurred:\n sql state = [%s]\n message = [%s]\n retry counter = %s\n", sqlState, e.getMessage(), attemptCount.get()); - System.out.printf("APP: ROLLBACK;\n"); - connection.rollback(); - int sleepMillis = (int)(Math.pow(2, attemptCount.get()) * 100) + RAND.nextInt(100); - System.out.printf("APP: Hit 40001 transaction retry error, sleeping %s milliseconds\n", sleepMillis); - try { - Thread.sleep(sleepMillis); - } catch (InterruptedException ignored) { - // no-op - } - rv.set(-1L); - } else { - throw e; - } - } - - return false; - })) { - break; - } - } - - return rv.get(); - } - - public static void main(String[] args) throws Exception { - try (Connection connection = DriverManager.getConnection( - "jdbc:postgresql://localhost:26257/bank?ssl=true&sslmode=require&sslrootcert=certs/ca.crt&sslkey=certs/client.maxroach.key.pk8&sslcert=certs/client.maxroach.crt", - "maxroach", - "" - )) { - DSLContext ctx = DSL.using(connection, SQLDialect.COCKROACHDB, new Settings() - .withExecuteLogging(true) - .withRenderQuotedNames(RenderQuotedNames.NEVER)); - - // Initialise database with db.sql script - try (InputStream in = Sample.class.getResourceAsStream("/db.sql")) { - ctx.parser().parse(Source.of(in).readString()).executeBatch(); - } - - long fromAccountId = 1; - long toAccountId = 2; - long transferAmount = 100; - - if (FORCE_RETRY) { - System.out.printf("APP: About to test retry logic in 'runTransaction'\n"); - runTransaction(ctx, forceRetryLogic()); - } else { - - runTransaction(ctx, addAccounts()); - long fromBalance = runTransaction(ctx, getAccountBalance(fromAccountId)); - long toBalance = runTransaction(ctx, getAccountBalance(toAccountId)); - if (fromBalance != -1 && toBalance != -1) { - // Success! 
- System.out.printf("APP: getAccountBalance(%d) --> %d\n", fromAccountId, fromBalance); - System.out.printf("APP: getAccountBalance(%d) --> %d\n", toAccountId, toBalance); - } - - // Transfer $100 from account 1 to account 2 - long transferResult = runTransaction(ctx, transferFunds(fromAccountId, toAccountId, transferAmount)); - if (transferResult != -1) { - // Success! - System.out.printf("APP: transferFunds(%d, %d, %d) --> %d \n", fromAccountId, toAccountId, transferAmount, transferResult); - - long fromBalanceAfter = runTransaction(ctx, getAccountBalance(fromAccountId)); - long toBalanceAfter = runTransaction(ctx, getAccountBalance(toAccountId)); - if (fromBalanceAfter != -1 && toBalanceAfter != -1) { - // Success! - System.out.printf("APP: getAccountBalance(%d) --> %d\n", fromAccountId, fromBalanceAfter); - System.out.printf("APP: getAccountBalance(%d) --> %d\n", toAccountId, toBalanceAfter); - } - } - } - } - } -} diff --git a/src/current/_includes/v22.1/app/jooq-basic-sample/jooq-basic-sample.zip b/src/current/_includes/v22.1/app/jooq-basic-sample/jooq-basic-sample.zip deleted file mode 100644 index 859305478c0..00000000000 Binary files a/src/current/_includes/v22.1/app/jooq-basic-sample/jooq-basic-sample.zip and /dev/null differ diff --git a/src/current/_includes/v22.1/app/pkcs8-gen.md b/src/current/_includes/v22.1/app/pkcs8-gen.md deleted file mode 100644 index 411d262e970..00000000000 --- a/src/current/_includes/v22.1/app/pkcs8-gen.md +++ /dev/null @@ -1,8 +0,0 @@ -You can pass the [`--also-generate-pkcs8-key` flag](cockroach-cert.html#flag-pkcs8) to [`cockroach cert`](cockroach-cert.html) to generate a key in [PKCS#8 format](https://tools.ietf.org/html/rfc5208), which is the standard key encoding format in Java. For example, if you have the user `max`: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach cert create-client max --certs-dir=certs --ca-key=my-safe-directory/ca.key --also-generate-pkcs8-key -~~~ - -The generated PKCS8 key will be named `client.max.key.pk8`. diff --git a/src/current/_includes/v22.1/app/python/sqlalchemy/sqlalchemy-large-txns.py b/src/current/_includes/v22.1/app/python/sqlalchemy/sqlalchemy-large-txns.py deleted file mode 100644 index 7a6ef82c2e3..00000000000 --- a/src/current/_includes/v22.1/app/python/sqlalchemy/sqlalchemy-large-txns.py +++ /dev/null @@ -1,57 +0,0 @@ -from sqlalchemy import create_engine, Column, Float, Integer -from sqlalchemy.ext.declarative import declarative_base -from sqlalchemy.orm import sessionmaker -from cockroachdb.sqlalchemy import run_transaction -from random import random - -Base = declarative_base() - -# The code below assumes you have run the following SQL statements. - -# CREATE DATABASE pointstore; - -# USE pointstore; - -# CREATE TABLE points ( -# id INT PRIMARY KEY DEFAULT unique_rowid(), -# x FLOAT NOT NULL, -# y FLOAT NOT NULL, -# z FLOAT NOT NULL -# ); - -engine = create_engine( - # For cockroach demo: - 'cockroachdb://:@:/bank?sslmode=require', - echo=True # Log SQL queries to stdout -) - - -class Point(Base): - __tablename__ = 'points' - id = Column(Integer, primary_key=True) - x = Column(Float) - y = Column(Float) - z = Column(Float) - - -def add_points(num_points): - chunk_size = 1000 # Tune this based on object sizes. 
- - def add_points_helper(sess, chunk, num_points): - points = [] - for i in range(chunk, min(chunk + chunk_size, num_points)): - points.append( - Point(x=random()*1024, y=random()*1024, z=random()*1024) - ) - sess.bulk_save_objects(points) - - for chunk in range(0, num_points, chunk_size): - run_transaction( - sessionmaker(bind=engine), - lambda s: add_points_helper( - s, chunk, min(chunk + chunk_size, num_points) - ) - ) - - -add_points(10000) diff --git a/src/current/_includes/v22.1/app/retry-errors.md b/src/current/_includes/v22.1/app/retry-errors.md deleted file mode 100644 index 3a20939e97c..00000000000 --- a/src/current/_includes/v22.1/app/retry-errors.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -Your application should [use a retry loop to handle transaction errors](error-handling-and-troubleshooting.html#transaction-retry-errors) that can occur under [contention]({{ link_prefix }}performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention). -{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/app/see-also-links.md b/src/current/_includes/v22.1/app/see-also-links.md deleted file mode 100644 index ee55292e744..00000000000 --- a/src/current/_includes/v22.1/app/see-also-links.md +++ /dev/null @@ -1,9 +0,0 @@ -You might also be interested in the following pages: - -- [Client Connection Parameters](connection-parameters.html) -- [Connection Pooling](connection-pooling.html) -- [Data Replication](demo-replication-and-rebalancing.html) -- [Fault Tolerance & Recovery](demo-fault-tolerance-and-recovery.html) -- [Replication & Rebalancing](demo-replication-and-rebalancing.html) -- [Cross-Cloud Migration](demo-automatic-cloud-migration.html) -- [Automated Operations](orchestrate-a-local-cluster-with-kubernetes-insecure.html) diff --git a/src/current/_includes/v22.1/app/start-cockroachdb.md b/src/current/_includes/v22.1/app/start-cockroachdb.md deleted file mode 100644 index a3348e2c4da..00000000000 --- a/src/current/_includes/v22.1/app/start-cockroachdb.md +++ /dev/null @@ -1,58 +0,0 @@ -Choose whether to run a temporary local cluster or a free CockroachDB cluster on CockroachDB {{ site.data.products.serverless }}. The instructions below will adjust accordingly. - -
- - -
- -
- -### Create a free cluster - -{% include cockroachcloud/quickstart/create-a-free-cluster.md %} - -### Set up your cluster connection - -The **Connection info** dialog shows information about how to connect to your cluster. - -1. Click the **Choose your OS** dropdown, and select the operating system of your local machine. - -1. Click the **Connection string** tab in the **Connection info** dialog. - -1. Open a new terminal on your local machine, and run the command provided in step **1** to download the CA certificate. This certificate is required by some clients connecting to CockroachDB {{ site.data.products.cloud }}. - -1. Copy the connection string provided in step **2** to a secure location. - - {{site.data.alerts.callout_info}} - The connection string is pre-populated with your username, password, cluster name, and other details. Your password, in particular, will be provided *only once*. Save it in a secure place (Cockroach Labs recommends a password manager) to connect to your cluster in the future. If you forget your password, you can reset it by going to the **SQL Users** page for the cluster, found at `https://cockroachlabs.cloud/cluster//users`. - {{site.data.alerts.end}} - -
- -
- -1. If you haven't already, [download the CockroachDB binary](install-cockroachdb.html). -1. Run the [`cockroach demo`](cockroach-demo.html) command: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach demo \ - --no-example-database - ~~~ - - This starts a temporary, in-memory cluster and opens an interactive SQL shell to the cluster. Any changes to the database will not persist after the cluster is stopped. - - {{site.data.alerts.callout_info}} - If `cockroach demo` fails due to SSL authentication, make sure you have cleared any previously downloaded CA certificates from the directory `~/.postgresql`. - {{site.data.alerts.end}} - -1. Take note of the `(sql)` connection string in the SQL shell welcome text: - - ~~~ - # Connection parameters: - # (webui) http://127.0.0.1:8080/demologin?password=demo76950&username=demo - # (sql) postgres://demo:demo76950@127.0.0.1:26257?sslmode=require - # (sql/unix) postgres://demo:demo76950@?host=%2Fvar%2Ffolders%2Fc8%2Fb_q93vjj0ybfz0fz0z8vy9zc0000gp%2FT%2Fdemo070856957&port=26257 - ~~~ - -
diff --git a/src/current/_includes/v22.1/app/upperdb-basic-sample/main.go b/src/current/_includes/v22.1/app/upperdb-basic-sample/main.go deleted file mode 100644 index 3e838fe43e2..00000000000 --- a/src/current/_includes/v22.1/app/upperdb-basic-sample/main.go +++ /dev/null @@ -1,187 +0,0 @@ -package main - -import ( - "fmt" - "log" - "time" - - "github.com/upper/db/v4" - "github.com/upper/db/v4/adapter/cockroachdb" -) - -// The settings variable stores connection details. -var settings = cockroachdb.ConnectionURL{ - Host: "localhost", - Database: "bank", - User: "maxroach", - Options: map[string]string{ - // Secure node. - "sslrootcert": "certs/ca.crt", - "sslkey": "certs/client.maxroach.key", - "sslcert": "certs/client.maxroach.crt", - }, -} - -// Accounts is a handy way to represent a collection. -func Accounts(sess db.Session) db.Store { - return sess.Collection("accounts") -} - -// Account is used to represent a single record in the "accounts" table. -type Account struct { - ID uint64 `db:"id,omitempty"` - Balance int64 `db:"balance"` -} - -// Collection is required in order to create a relation between the Account -// struct and the "accounts" table. -func (a *Account) Store(sess db.Session) db.Store { - return Accounts(sess) -} - -// createTables creates all the tables that are neccessary to run this example. -func createTables(sess db.Session) error { - _, err := sess.SQL().Exec(` - CREATE TABLE IF NOT EXISTS accounts ( - ID SERIAL PRIMARY KEY, - balance INT - ) - `) - if err != nil { - return err - } - return nil -} - -// crdbForceRetry can be used to simulate a transaction error and -// demonstrate upper/db's ability to retry the transaction automatically. -// -// By default, upper/db will retry the transaction five times, if you want -// to modify this number use: sess.SetMaxTransactionRetries(n). -// -// This is only used for demonstration purposes and not intended -// for production code. -func crdbForceRetry(sess db.Session) error { - var err error - - // The first statement in a transaction can be retried transparently on the - // server, so we need to add a placeholder statement so that our - // force_retry() statement isn't the first one. - _, err = sess.SQL().Exec(`SELECT 1`) - if err != nil { - return err - } - - // If force_retry is called during the specified interval from the beginning - // of the transaction it returns a retryable error. If not, 0 is returned - // instead of an error. - _, err = sess.SQL().Exec(`SELECT crdb_internal.force_retry('1s'::INTERVAL)`) - if err != nil { - return err - } - - return nil -} - -func main() { - // Connect to the local CockroachDB node. - sess, err := cockroachdb.Open(settings) - if err != nil { - log.Fatal("cockroachdb.Open: ", err) - } - defer sess.Close() - - // Adjust this number to fit your specific needs (set to 5, by default) - // sess.SetMaxTransactionRetries(10) - - // Create the "accounts" table - createTables(sess) - - // Delete all the previous items in the "accounts" table. - err = Accounts(sess).Truncate() - if err != nil { - log.Fatal("Truncate: ", err) - } - - // Create a new account with a balance of 1000. - account1 := Account{Balance: 1000} - err = Accounts(sess).InsertReturning(&account1) - if err != nil { - log.Fatal("sess.Save: ", err) - } - - // Create a new account with a balance of 250. 
- account2 := Account{Balance: 250} - err = Accounts(sess).InsertReturning(&account2) - if err != nil { - log.Fatal("sess.Save: ", err) - } - - // Printing records - printRecords(sess) - - // Change the balance of the first account. - account1.Balance = 500 - err = sess.Save(&account1) - if err != nil { - log.Fatal("sess.Save: ", err) - } - - // Change the balance of the second account. - account2.Balance = 999 - err = sess.Save(&account2) - if err != nil { - log.Fatal("sess.Save: ", err) - } - - // Printing records - printRecords(sess) - - // Delete the first record. - err = sess.Delete(&account1) - if err != nil { - log.Fatal("Delete: ", err) - } - - startTime := time.Now() - - // Add a couple of new records within a transaction. - err = sess.Tx(func(tx db.Session) error { - var err error - - if err = tx.Save(&Account{Balance: 887}); err != nil { - return err - } - - if time.Now().Sub(startTime) < time.Second*1 { - // Will fail continuously for 2 seconds. - if err = crdbForceRetry(tx); err != nil { - return err - } - } - - if err = tx.Save(&Account{Balance: 342}); err != nil { - return err - } - - return nil - }) - if err != nil { - log.Fatal("Could not commit transaction: ", err) - } - - // Printing records - printRecords(sess) -} - -func printRecords(sess db.Session) { - accounts := []Account{} - err := Accounts(sess).Find().All(&accounts) - if err != nil { - log.Fatal("Find: ", err) - } - log.Printf("Balances:") - for i := range accounts { - fmt.Printf("\taccounts[%d]: %d\n", accounts[i].ID, accounts[i].Balance) - } -} diff --git a/src/current/_includes/v22.1/backups/advanced-examples-list.md b/src/current/_includes/v22.1/backups/advanced-examples-list.md deleted file mode 100644 index fb519d7bfe0..00000000000 --- a/src/current/_includes/v22.1/backups/advanced-examples-list.md +++ /dev/null @@ -1,11 +0,0 @@ -For examples of advanced `BACKUP` and `RESTORE` use cases, see: - -- [Incremental backups with a specified destination](take-full-and-incremental-backups.html#incremental-backups-with-explicitly-specified-destinations) -- [Backup with revision history and point-in-time restore](take-backups-with-revision-history-and-restore-from-a-point-in-time.html) -- [Locality-aware backup and restore](take-and-restore-locality-aware-backups.html) -- [Encrypted backup and restore](take-and-restore-encrypted-backups.html) -- [Restore into a different database](restore.html#restore-tables-into-a-different-database) -- [Remove the foreign key before restore](restore.html#remove-the-foreign-key-before-restore) -- [Restoring users from `system.users` backup](restore.html#restoring-users-from-system-users-backup) -- [Show an incremental backup at a different location](show-backup.html#show-a-backup-taken-with-the-incremental-location-option) -- [Exclude a table's data from backups](take-full-and-incremental-backups.html#exclude-a-tables-data-from-backups) diff --git a/src/current/_includes/v22.1/backups/aws-auth-note.md b/src/current/_includes/v22.1/backups/aws-auth-note.md deleted file mode 100644 index 759a8ad1d3a..00000000000 --- a/src/current/_includes/v22.1/backups/aws-auth-note.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -The examples in this section use the **default** `AUTH=specified` parameter. For more detail on how to use `implicit` authentication with Amazon S3 buckets, read [Use Cloud Storage for Bulk Operations — Authentication](use-cloud-storage-for-bulk-operations.html#authentication). 
-{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/backups/azure-url-encode.md b/src/current/_includes/v22.1/backups/azure-url-encode.md deleted file mode 100644 index 41036bfea3d..00000000000 --- a/src/current/_includes/v22.1/backups/azure-url-encode.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -Azure storage containers **require** a [url encoded](https://en.wikipedia.org/wiki/Percent-encoding) `ACCOUNT_KEY` since it is base64-encoded and may contain +, /, = characters. For more detail on how to pass your Azure Storage credentials with this parameter, read [Use Cloud Storage for Bulk Operations — Authentication](use-cloud-storage-for-bulk-operations.html#authentication). -{{site.data.alerts.end}} \ No newline at end of file diff --git a/src/current/_includes/v22.1/backups/backup-options.md b/src/current/_includes/v22.1/backups/backup-options.md deleted file mode 100644 index 2c6f112f38f..00000000000 --- a/src/current/_includes/v22.1/backups/backup-options.md +++ /dev/null @@ -1,7 +0,0 @@ - Option | Value | Description ------------------------------------------------------------------+-------------------------+------------------------------ -`revision_history` | N/A | Create a backup with full [revision history](take-backups-with-revision-history-and-restore-from-a-point-in-time.html), which records every change made to the cluster within the garbage collection period leading up to and including the given timestamp. -`encryption_passphrase` | [`STRING`](string.html) | The passphrase used to [encrypt the files](take-and-restore-encrypted-backups.html) (`BACKUP` manifest and data files) that the `BACKUP` statement generates. This same passphrase is needed to decrypt the file when it is used to [restore](take-and-restore-encrypted-backups.html) and to list the contents of the backup when using [`SHOW BACKUP`](show-backup.html). There is no practical limit on the length of the passphrase. -`DETACHED` | N/A | When a backup runs in `DETACHED` mode, it will execute asynchronously. The job ID will be returned after the backup [job creation](backup-architecture.html#job-creation-phase) completes. Note that with `DETACHED` specified, further job information and the job completion status will not be returned. For more on the differences between the returned job data, see the [example](backup.html#run-a-backup-asynchronously) below. To check on the job status, use the [`SHOW JOBS`](show-jobs.html) statement.

To run a backup within a [transaction](transactions.html), use the `DETACHED` option. -`kms` | [`STRING`](string.html) | The [key management service (KMS) URI](take-and-restore-encrypted-backups.html#uri-formats) (or a [comma-separated list of URIs](take-and-restore-encrypted-backups.html#take-a-backup-with-multi-region-encryption)) used to encrypt the files (`BACKUP` manifest and data files) that the `BACKUP` statement generates. This same KMS URI is needed to decrypt the file when it is used to [restore](take-and-restore-encrypted-backups.html#restore-from-an-encrypted-backup) and to list the contents of the backup when using [`SHOW BACKUP`](show-backup.html).

Currently, AWS KMS and Google Cloud KMS are supported. -`incremental_location` | [`STRING`](string.html) | Create an incremental backup in a different location than the default incremental backup location.

`WITH incremental_location = 'explicit_incrementals_URI'`

See [Incremental backups with explicitly specified destinations](take-full-and-incremental-backups.html#incremental-backups-with-explicitly-specified-destinations) for usage. diff --git a/src/current/_includes/v22.1/backups/backup-to-deprec.md b/src/current/_includes/v22.1/backups/backup-to-deprec.md deleted file mode 100644 index 1515e96c713..00000000000 --- a/src/current/_includes/v22.1/backups/backup-to-deprec.md +++ /dev/null @@ -1,7 +0,0 @@ -{{site.data.alerts.callout_danger}} -The `BACKUP ... TO` and `RESTORE ... FROM` syntax is **deprecated** as of v22.1 and will be removed in a future release. - -We recommend using the `BACKUP ... INTO {collectionURI}` syntax, which creates or adds to a [backup collection]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %}#backup-collections) in your storage location. For restoring backups, we recommend using `RESTORE FROM {backup} IN {collectionURI}` with `{backup}` being [`LATEST`]({% link {{ page.version.version }}/restore.md %}#restore-the-most-recent-backup) or a specific [subdirectory]({% link {{ page.version.version }}/restore.md %}#subdir-param). - -For guidance on the syntax for backups and restores, see the [`BACKUP`]({% link {{ page.version.version }}/backup.md %}#examples) and [`RESTORE`]({% link {{ page.version.version }}/restore.md %}#examples) examples. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/backups/bulk-auth-options.md b/src/current/_includes/v22.1/backups/bulk-auth-options.md deleted file mode 100644 index ab02410dcac..00000000000 --- a/src/current/_includes/v22.1/backups/bulk-auth-options.md +++ /dev/null @@ -1,4 +0,0 @@ -The following examples make use of: - -- Amazon S3 connection strings. For guidance on connecting to other storage options or using other authentication parameters instead, read [Use Cloud Storage for Bulk Operations](use-cloud-storage-for-bulk-operations.html#example-file-urls). -- The **default** `AUTH=specified` parameter. For guidance on using `AUTH=implicit` authentication with Amazon S3 buckets instead, read [Use Cloud Storage for Bulk Operations — Authentication](use-cloud-storage-for-bulk-operations.html#authentication). \ No newline at end of file diff --git a/src/current/_includes/v22.1/backups/destination-file-privileges.md b/src/current/_includes/v22.1/backups/destination-file-privileges.md deleted file mode 100644 index 913e042461c..00000000000 --- a/src/current/_includes/v22.1/backups/destination-file-privileges.md +++ /dev/null @@ -1,12 +0,0 @@ -The destination file URL does **not** require the [`admin` role](security-reference/authorization.html#admin-role) in the following scenarios: - -- S3 and GS using `SPECIFIED` (and not `IMPLICIT`) credentials. Azure is always `SPECIFIED` by default. -- [Userfile](use-userfile-for-bulk-operations.html) - -The destination file URL **does** require the [`admin` role](security-reference/authorization.html#admin-role) in the following scenarios: - -- S3 or GS using `IMPLICIT` credentials -- Use of a [custom endpoint](https://docs.aws.amazon.com/sdk-for-go/api/aws/endpoints/) on S3 -- [Nodelocal](cockroach-nodelocal-upload.html) - -We recommend using [cloud storage for bulk operations](use-cloud-storage-for-bulk-operations.html). 
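-
-As a minimal sketch of that recommendation (the bucket name, path, and credentials below are placeholders), a full backup to Amazon S3 using the default `AUTH=specified` behavior passes the credentials directly in the URI:
-
-~~~ sql
-BACKUP INTO 's3://{bucket name}/{path}?AWS_ACCESS_KEY_ID={access key ID}&AWS_SECRET_ACCESS_KEY={secret access key}';
-~~~
-
-Because the credentials are specified in the URI rather than loaded implicitly from the node's environment, this form does not require the `admin` role.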
diff --git a/src/current/_includes/v22.1/backups/encrypted-backup-description.md b/src/current/_includes/v22.1/backups/encrypted-backup-description.md deleted file mode 100644 index f0c39d2551a..00000000000 --- a/src/current/_includes/v22.1/backups/encrypted-backup-description.md +++ /dev/null @@ -1,11 +0,0 @@ -You can encrypt full or incremental backups with a passphrase by using the [`encryption_passphrase` option](backup.html#with-encryption-passphrase). Files written by the backup (including `BACKUP` manifests and data files) are encrypted using the specified passphrase to derive a key. To restore the encrypted backup, the same `encryption_passphrase` option (with the same passphrase) must be included in the [`RESTORE`](restore.html) statement. - -When used with [incremental backups](take-full-and-incremental-backups.html#incremental-backups), the `encryption_passphrase` option is applied to all the [backup file URLs](backup.html#backup-file-urls), which means the same passphrase must be used when appending another incremental backup to an existing backup. Similarly, when used with [locality-aware backups](take-and-restore-locality-aware-backups.html), the passphrase provided is applied to files in all localities. - -Encryption is done using [AES-256-GCM](https://en.wikipedia.org/wiki/Galois/Counter_Mode), and GCM is used to both encrypt and authenticate the files. A random [salt](https://en.wikipedia.org/wiki/Salt_(cryptography)) is used to derive a once-per-backup [AES](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard) key from the specified passphrase, and then a random [initialization vector](https://en.wikipedia.org/wiki/Initialization_vector) is used per-file. CockroachDB uses [PBKDF2](https://en.wikipedia.org/wiki/PBKDF2) with 64,000 iterations for the key derivation. - -{{site.data.alerts.callout_info}} -`BACKUP` and `RESTORE` will use more memory when using encryption, as both the plain-text and cipher-text of a given file are held in memory during encryption and decryption. -{{site.data.alerts.end}} - -For an example of an encrypted backup, see [Create an encrypted backup](take-and-restore-encrypted-backups.html#take-an-encrypted-backup-using-a-passphrase). diff --git a/src/current/_includes/v22.1/backups/file-size-setting.md b/src/current/_includes/v22.1/backups/file-size-setting.md deleted file mode 100644 index 8f94d415e11..00000000000 --- a/src/current/_includes/v22.1/backups/file-size-setting.md +++ /dev/null @@ -1,5 +0,0 @@ -{{site.data.alerts.callout_info}} -To set a target for the amount of backup data written to each backup file, use the `bulkio.backup.file_size` [cluster setting](cluster-settings.html). - -See the [`SET CLUSTER SETTING`](set-cluster-setting.html) page for more details on using cluster settings. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/backups/gcs-auth-note.md b/src/current/_includes/v22.1/backups/gcs-auth-note.md deleted file mode 100644 index 360ea21cb63..00000000000 --- a/src/current/_includes/v22.1/backups/gcs-auth-note.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -The examples in this section use the `AUTH=specified` parameter, which will be the default behavior in v21.2 and beyond for connecting to Google Cloud Storage. For more detail on how to pass your Google Cloud Storage credentials with this parameter, or, how to use `implicit` authentication, read [Use Cloud Storage for Bulk Operations — Authentication](use-cloud-storage-for-bulk-operations.html#authentication). 
-{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/backups/gcs-default-deprec.md b/src/current/_includes/v22.1/backups/gcs-default-deprec.md deleted file mode 100644 index aafea15e804..00000000000 --- a/src/current/_includes/v22.1/backups/gcs-default-deprec.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -**Deprecation notice:** Currently, GCS connections default to the `cloudstorage.gs.default.key` [cluster setting](cluster-settings.html). This default behavior will no longer be supported in v21.2. If you are relying on this default behavior, we recommend adjusting your queries and scripts to now specify the `AUTH` parameter you want to use. Similarly, if you are using the `cloudstorage.gs.default.key` cluster setting to authorize your GCS connection, we recommend switching to use `AUTH=specified` or `AUTH=implicit`. `AUTH=specified` will be the default behavior in v21.2 and beyond. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/backups/no-incremental-restore.md b/src/current/_includes/v22.1/backups/no-incremental-restore.md deleted file mode 100644 index b2f071c1e5e..00000000000 --- a/src/current/_includes/v22.1/backups/no-incremental-restore.md +++ /dev/null @@ -1 +0,0 @@ -When you restore from an incremental backup, you're restoring the **entire** table, database, or cluster. CockroachDB uses both the latest (or a [specific](restore.html#restore-a-specific-backup)) incremental backup and the full backup during this process. You cannot restore an incremental backup without a full backup. Furthermore, it is not possible to restore over a [table](restore.html#tables), [database](restore.html#databases), or [cluster](restore.html#full-cluster) with existing data. See [Restore types](restore.html#restore-types) for detail on the types of backups you can restore. diff --git a/src/current/_includes/v22.1/backups/retry-failure.md b/src/current/_includes/v22.1/backups/retry-failure.md deleted file mode 100644 index 81740c0a27d..00000000000 --- a/src/current/_includes/v22.1/backups/retry-failure.md +++ /dev/null @@ -1 +0,0 @@ -If a backup job encounters too many retryable errors, it will enter a [`failed` state](show-jobs.html#job-status) with the most recent error, which allows subsequent backups the chance to succeed. Refer to [Set up monitoring for the backup schedule](manage-a-backup-schedule.html#set-up-monitoring-for-the-backup-schedule) for metrics to track backup failures. \ No newline at end of file diff --git a/src/current/_includes/v22.1/backups/show-backup-replace-diagram.html b/src/current/_includes/v22.1/backups/show-backup-replace-diagram.html deleted file mode 100644 index 539b72b45da..00000000000 --- a/src/current/_includes/v22.1/backups/show-backup-replace-diagram.html +++ /dev/null @@ -1,50 +0,0 @@ -
- - - - - SHOW - - - BACKUPS - - - IN - - - collectionURI - - - BACKUP - - - SCHEMAS - - - FROM - - - subdirectory - - IN - - - collectionURI - - WITH - - - kv_option_list - - OPTIONS - - - ( - - - kv_option_list - - ) - - -
\ No newline at end of file diff --git a/src/current/_includes/v22.1/cdc/avro-limitations.md b/src/current/_includes/v22.1/cdc/avro-limitations.md deleted file mode 100644 index aa9cec6e5c3..00000000000 --- a/src/current/_includes/v22.1/cdc/avro-limitations.md +++ /dev/null @@ -1,29 +0,0 @@ -- [Decimals](decimal.html) must have precision specified. -- [`BYTES`](bytes.html) (or its aliases `BYTEA` and `BLOB`) are often used to store machine-readable data. When you stream these types through a changefeed with [`format=avro`](create-changefeed.html#format), CockroachDB does not encode or change the data. However, Avro clients can often include escape sequences to present the data in a printable format, which can interfere with deserialization. A potential solution is to hex-encode `BYTES` values when initially inserting them into CockroachDB. This will ensure that Avro clients can consistently decode the hexadecimal. Note that hex-encoding values at insertion will increase record size. -- [`BIT`](bit.html) and [`VARBIT`](bit.html) types are encoded as arrays of 64-bit integers. - - For efficiency, CockroachDB encodes `BIT` and `VARBIT` bitfield types as arrays of 64-bit integers. That is, [base-2 (binary format)](https://en.wikipedia.org/wiki/Binary_number#Conversion_to_and_from_other_numeral_systems) `BIT` and `VARBIT` data types are converted to base 10 and stored in arrays. Encoding in CockroachDB is [big-endian](https://en.wikipedia.org/wiki/Endianness), therefore the last value may have many trailing zeroes. For this reason, the first value of each array is the number of bits that are used in the last value of the array. - - For instance, if the bitfield is 129 bits long, there will be 4 integers in the array. The first integer will be `1`; representing the number of bits in the last value, the second integer will be the first 64 bits, the third integer will be bits 65–128, and the last integer will either be `0` or `9223372036854775808` (i.e., the integer with only the first bit set, or `1000000000000000000000000000000000000000000000000000000000000000` when base 2). - - This example is base-10 encoded into an array as follows: - - ~~~ - {"array": [1, , , 0 or 9223372036854775808]} - ~~~ - - For downstream processing, it is necessary to base-2 encode every element in the array (except for the first element). The first number in the array gives you the number of bits to take from the last base-2 number — that is, the most significant bits. So, in the example above this would be `1`. Finally, all the base-2 numbers can be appended together, which will result in the original number of bits, 129. - - In a different example of this process where the bitfield is 136 bits long, the array would be similar to the following when base-10 encoded: - - ~~~ - {"array": [8, 18293058736425533439, 18446744073709551615, 13690942867206307840]} - ~~~ - - To then work with this data, you would convert each of the elements in the array to base-2 numbers, besides the first element. For the above array, this would convert to: - - ~~~ - [8, 1111110111011011111111111111111111111111111111111111111111111111, 1111111111111111111111111111111111111111111111111111111111111111, 1011111000000000000000000000000000000000000000000000000000000000] - ~~~ - - Next, you use the first element in the array to take the number of bits from the last base-2 element, `10111110`. Finally, you append each of the base-2 numbers together — in the above array, the second, third, and truncated last element. 
This results in 136 bits, the original number of bits. diff --git a/src/current/_includes/v22.1/cdc/cdc-cloud-rangefeed.md b/src/current/_includes/v22.1/cdc/cdc-cloud-rangefeed.md deleted file mode 100644 index 85e6255848e..00000000000 --- a/src/current/_includes/v22.1/cdc/cdc-cloud-rangefeed.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -If you are working on a CockroachDB {{ site.data.products.serverless }} cluster, the `kv.rangefeed.enabled` cluster setting is enabled by default. -{{site.data.alerts.end}} \ No newline at end of file diff --git a/src/current/_includes/v22.1/cdc/client-key-encryption.md b/src/current/_includes/v22.1/cdc/client-key-encryption.md deleted file mode 100644 index c7c7be4c38c..00000000000 --- a/src/current/_includes/v22.1/cdc/client-key-encryption.md +++ /dev/null @@ -1 +0,0 @@ -**Note:** Client keys are often encrypted. You will receive an error if you pass an encrypted client key in your changefeed statement. To decrypt the client key, run: `openssl rsa -in key.pem -out key.decrypt.pem -passin pass:{PASSWORD}`. Once decrypted, be sure to update your changefeed statement to use the new `key.decrypt.pem` file instead. \ No newline at end of file diff --git a/src/current/_includes/v22.1/cdc/configure-all-changefeed.md b/src/current/_includes/v22.1/cdc/configure-all-changefeed.md deleted file mode 100644 index b2d87c8cd5e..00000000000 --- a/src/current/_includes/v22.1/cdc/configure-all-changefeed.md +++ /dev/null @@ -1,19 +0,0 @@ -It is useful to be able to pause all running changefeeds during troubleshooting, testing, or when a decrease in CPU load is needed. - -To pause all running changefeeds: - -{% include_cached copy-clipboard.html %} -~~~sql -PAUSE JOBS (WITH x AS (SHOW CHANGEFEED JOBS) SELECT job_id FROM x WHERE status = ('running')); -~~~ - -This will change the status for each of the running changefeeds to `paused`, which can be verified with [`SHOW CHANGEFEED JOBS`](show-jobs.html#show-changefeed-jobs). - -To resume all running changefeeds: - -{% include_cached copy-clipboard.html %} -~~~sql -RESUME JOBS (WITH x AS (SHOW CHANGEFEED JOBS) SELECT job_id FROM x WHERE status = ('paused')); -~~~ - -This will resume the changefeeds and update the status for each of the changefeeds to `running`. diff --git a/src/current/_includes/v22.1/cdc/confluent-cloud-sr-url.md b/src/current/_includes/v22.1/cdc/confluent-cloud-sr-url.md deleted file mode 100644 index 556adbd7bff..00000000000 --- a/src/current/_includes/v22.1/cdc/confluent-cloud-sr-url.md +++ /dev/null @@ -1 +0,0 @@ -To connect to Confluent Cloud, use the following URL structure: `'https://{API_KEY_ID}:{API_SECRET_URL_ENCODED}@{CONFLUENT_REGISTRY_URL}:443'`. See the [Stream a Changefeed to a Confluent Cloud Kafka Cluster](stream-a-changefeed-to-a-confluent-cloud-kafka-cluster.html#step-8-create-a-changefeed) tutorial for further detail. \ No newline at end of file diff --git a/src/current/_includes/v22.1/cdc/core-csv.md b/src/current/_includes/v22.1/cdc/core-csv.md deleted file mode 100644 index 4ee6bfc587d..00000000000 --- a/src/current/_includes/v22.1/cdc/core-csv.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -To determine how wide the columns need to be, the default `table` display format in `cockroach sql` buffers the results it receives from the server before printing them to the console. When consuming core changefeed data using `cockroach sql`, it's important to use a display format like `csv` that does not buffer its results. 
To set the display format, use the [`--format=csv` flag](cockroach-sql.html#sql-flag-format) when starting the [built-in SQL client](cockroach-sql.html), or set the [`\set display_format=csv` option](cockroach-sql.html#client-side-options) once the SQL client is open. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/cdc/core-url.md b/src/current/_includes/v22.1/cdc/core-url.md deleted file mode 100644 index 7241e203aa7..00000000000 --- a/src/current/_includes/v22.1/cdc/core-url.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -Because core changefeeds return results differently than other SQL statements, they require a dedicated database connection with specific settings around result buffering. In normal operation, CockroachDB improves performance by buffering results server-side before returning them to a client; however, result buffering is automatically turned off for core changefeeds. Core changefeeds also have different cancellation behavior than other queries: they can only be canceled by closing the underlying connection or issuing a [`CANCEL QUERY`](cancel-query.html) statement on a separate connection. Combined, these attributes of changefeeds mean that applications should explicitly create dedicated connections to consume changefeed data, instead of using a connection pool as most client drivers do by default. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/cdc/create-core-changefeed-avro.md b/src/current/_includes/v22.1/cdc/create-core-changefeed-avro.md deleted file mode 100644 index 14051253a22..00000000000 --- a/src/current/_includes/v22.1/cdc/create-core-changefeed-avro.md +++ /dev/null @@ -1,122 +0,0 @@ -In this example, you'll set up a core changefeed for a single-node cluster that emits Avro records. CockroachDB's Avro binary encoding convention uses the [Confluent Schema Registry](https://docs.confluent.io/current/schema-registry/docs/serializer-formatter.html) to store Avro schemas. - -1. Use the [`cockroach start-single-node`](cockroach-start-single-node.html) command to start a single-node cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start-single-node \ - --insecure \ - --listen-addr=localhost \ - --background - ~~~ - -2. Download and extract the [Confluent Open Source platform](https://www.confluent.io/download/). - -3. Move into the extracted `confluent-` directory and start Confluent: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ ./bin/confluent local services start - ~~~ - - Only `zookeeper`, `kafka`, and `schema-registry` are needed. To troubleshoot Confluent, see [their docs](https://docs.confluent.io/current/installation/installing_cp.html#zip-and-tar-archives) and the [Quick Start Guide](https://docs.confluent.io/platform/current/quickstart/ce-quickstart.html#ce-quickstart). - -4. As the `root` user, open the [built-in SQL client](cockroach-sql.html): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --url="postgresql://root@127.0.0.1:26257?sslmode=disable" --format=csv - ~~~ - - {% include {{ page.version.version }}/cdc/core-url.md %} - - {% include {{ page.version.version }}/cdc/core-csv.md %} - -5. Enable the `kv.rangefeed.enabled` [cluster setting](cluster-settings.html): - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SET CLUSTER SETTING kv.rangefeed.enabled = true; - ~~~ - -6. Create table `bar`: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE TABLE bar (a INT PRIMARY KEY); - ~~~ - -7. 
Insert a row into the table: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > INSERT INTO bar VALUES (0); - ~~~ - -8. Start the core changefeed: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > EXPERIMENTAL CHANGEFEED FOR bar WITH format = avro, confluent_schema_registry = 'http://localhost:8081'; - ~~~ - - ~~~ - table,key,value - bar,\000\000\000\000\001\002\000,\000\000\000\000\002\002\002\000 - ~~~ - -9. In a new terminal, add another row: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --insecure -e "INSERT INTO bar VALUES (1)" - ~~~ - -10. Back in the terminal where the core changefeed is streaming, the output will appear: - - ~~~ - bar,\000\000\000\000\001\002\002,\000\000\000\000\002\002\002\002 - ~~~ - - Note that records may take a couple of seconds to display in the core changefeed. - -11. To stop streaming the changefeed, enter **CTRL+C** into the terminal where the changefeed is running. - -12. To stop `cockroach`: - - Get the process ID of the node: - - {% include_cached copy-clipboard.html %} - ~~~ shell - ps -ef | grep cockroach | grep -v grep - ~~~ - - ~~~ - 501 21766 1 0 6:21PM ttys001 0:00.89 cockroach start-single-node --insecure --listen-addr=localhost - ~~~ - - Gracefully shut down the node, specifying its process ID: - - {% include_cached copy-clipboard.html %} - ~~~ shell - kill -TERM 21766 - ~~~ - - ~~~ - initiating graceful shutdown of server - server drained and shutdown completed - ~~~ - -13. To stop Confluent, move into the extracted `confluent-` directory and stop Confluent: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ ./bin/confluent local services stop - ~~~ - - To terminate all Confluent processes, use: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ ./bin/confluent local destroy - ~~~ diff --git a/src/current/_includes/v22.1/cdc/create-core-changefeed.md b/src/current/_includes/v22.1/cdc/create-core-changefeed.md deleted file mode 100644 index fa397cd36f5..00000000000 --- a/src/current/_includes/v22.1/cdc/create-core-changefeed.md +++ /dev/null @@ -1,98 +0,0 @@ -In this example, you'll set up a core changefeed for a single-node cluster. - -1. In a terminal window, start `cockroach`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start-single-node \ - --insecure \ - --listen-addr=localhost \ - --background - ~~~ - -2. As the `root` user, open the [built-in SQL client](cockroach-sql.html): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql \ - --url="postgresql://root@127.0.0.1:26257?sslmode=disable" \ - --format=csv - ~~~ - - {% include {{ page.version.version }}/cdc/core-url.md %} - - {% include {{ page.version.version }}/cdc/core-csv.md %} - -3. Enable the `kv.rangefeed.enabled` [cluster setting](cluster-settings.html): - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SET CLUSTER SETTING kv.rangefeed.enabled = true; - ~~~ - -4. Create table `foo`: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE TABLE foo (a INT PRIMARY KEY); - ~~~ - -5. Insert a row into the table: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > INSERT INTO foo VALUES (0); - ~~~ - -6. Start the core changefeed: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > EXPERIMENTAL CHANGEFEED FOR foo; - ~~~ - ~~~ - table,key,value - foo,[0],"{""after"": {""a"": 0}}" - ~~~ - -7. 
In a new terminal, add another row: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --insecure -e "INSERT INTO foo VALUES (1)" - ~~~ - -8. Back in the terminal where the core changefeed is streaming, the following output has appeared: - - ~~~ - foo,[1],"{""after"": {""a"": 1}}" - ~~~ - - Note that records may take a couple of seconds to display in the core changefeed. - -9. To stop streaming the changefeed, enter **CTRL+C** into the terminal where the changefeed is running. - -10. To stop `cockroach`: - - Get the process ID of the node: - - {% include_cached copy-clipboard.html %} - ~~~ shell - ps -ef | grep cockroach | grep -v grep - ~~~ - - ~~~ - 501 21766 1 0 6:21PM ttys001 0:00.89 cockroach start-single-node --insecure --listen-addr=localhost - ~~~ - - Gracefully shut down the node, specifying its process ID: - - {% include_cached copy-clipboard.html %} - ~~~ shell - kill -TERM 21766 - ~~~ - - ~~~ - initiating graceful shutdown of server - server drained and shutdown completed - ~~~ diff --git a/src/current/_includes/v22.1/cdc/create-example-db-cdc.md b/src/current/_includes/v22.1/cdc/create-example-db-cdc.md deleted file mode 100644 index 17902b10eac..00000000000 --- a/src/current/_includes/v22.1/cdc/create-example-db-cdc.md +++ /dev/null @@ -1,50 +0,0 @@ -1. Create a database called `cdc_demo`: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE DATABASE cdc_demo; - ~~~ - -1. Set the database as the default: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SET DATABASE = cdc_demo; - ~~~ - -1. Create a table and add data: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE TABLE office_dogs ( - id INT PRIMARY KEY, - name STRING); - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > INSERT INTO office_dogs VALUES - (1, 'Petee'), - (2, 'Carl'); - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > UPDATE office_dogs SET name = 'Petee H' WHERE id = 1; - ~~~ - -1. 
Create another table and add data: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE TABLE employees ( - dog_id INT REFERENCES office_dogs (id), - employee_name STRING); - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > INSERT INTO employees VALUES - (1, 'Lauren'), - (2, 'Spencer'); - ~~~ diff --git a/src/current/_includes/v22.1/cdc/external-urls.md b/src/current/_includes/v22.1/cdc/external-urls.md deleted file mode 100644 index f4aa029779a..00000000000 --- a/src/current/_includes/v22.1/cdc/external-urls.md +++ /dev/null @@ -1,48 +0,0 @@ -~~~ -[scheme]://[host]/[path]?[parameters] -~~~ - -Location | Scheme | Host | Parameters | -|-------------------------------------------------------------+-------------+--------------------------------------------------+---------------------------------------------------------------------------- -Amazon | `s3` | Bucket name | `AUTH` [1](#considerations) (optional; can be `implicit` or `specified`), `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_SESSION_TOKEN` -Azure | `azure` | N/A (see [Example file URLs](#example-file-urls) | `AZURE_ACCOUNT_KEY`, `AZURE_ACCOUNT_NAME` -Google Cloud [2](#considerations) | `gs` | Bucket name | `AUTH` (optional; can be `default`, `implicit`, or `specified`), `CREDENTIALS` -HTTP [3](#considerations) | `http` | Remote host | N/A -NFS/Local [4](#considerations) | `nodelocal` | `nodeID` or `self` [5](#considerations) (see [Example file URLs](#example-file-urls)) | N/A -S3-compatible services [6](#considerations) | `s3` | Bucket name | `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_SESSION_TOKEN`, `AWS_REGION` [7](#considerations) (optional), `AWS_ENDPOINT` - -{{site.data.alerts.callout_info}} -The location parameters often contain special characters that need to be URI-encoded. Use Javascript's [`encodeURIComponent`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/encodeURIComponent) function or Go language's [`url.QueryEscape`](https://golang.org/pkg/net/url/#QueryEscape) function to URI-encode the parameters. Other languages provide similar functions to URI-encode special characters. -{{site.data.alerts.end}} - -{{site.data.alerts.callout_info}} -If your environment requires an HTTP or HTTPS proxy server for outgoing connections, you can set the standard `HTTP_PROXY` and `HTTPS_PROXY` environment variables when starting CockroachDB. - - If you cannot run a full proxy, you can disable external HTTP(S) access (as well as custom HTTP(S) endpoints) when performing bulk operations (e.g., [`BACKUP`](backup.html), [`RESTORE`](restore.html), etc.) by using the [`--external-io-disable-http` flag](cockroach-start.html#security). You can also disable the use of implicit credentials when accessing external cloud storage services for various bulk operations by using the [`--external-io-disable-implicit-credentials` flag](cockroach-start.html#security). -{{site.data.alerts.end}} - - - -- 1 If the `AUTH` parameter is not provided, AWS connections default to `specified` and the access keys must be provided in the URI parameters. If the `AUTH` parameter is `implicit`, the access keys can be omitted and [the credentials will be loaded from the environment](https://docs.aws.amazon.com/sdk-for-go/api/aws/session/). - -- 2 If the `AUTH` parameter is not specified, the `cloudstorage.gs.default.key` [cluster setting](cluster-settings.html) will be used if it is non-empty, otherwise the `implicit` behavior is used. 
If the `AUTH` parameter is `implicit`, all GCS connections use Google's [default authentication strategy](https://cloud.google.com/docs/authentication/production#providing_credentials_to_your_application). If the `AUTH` parameter is `default`, the `cloudstorage.gs.default.key` [cluster setting](cluster-settings.html) must be set to the contents of a [service account file](https://cloud.google.com/docs/authentication/production#obtaining_and_providing_service_account_credentials_manually) which will be used during authentication. If the `AUTH` parameter is `specified`, GCS connections are authenticated on a per-statement basis, which allows the JSON key object to be sent in the `CREDENTIALS` parameter. The JSON key object should be Base64-encoded (using the standard encoding in [RFC 4648](https://tools.ietf.org/html/rfc4648)). - -- 3 You can create your own HTTP server with [Caddy or nginx](use-a-local-file-server-for-bulk-operations.html). A custom root CA can be appended to the system's default CAs by setting the `cloudstorage.http.custom_ca` [cluster setting](cluster-settings.html), which will be used when verifying certificates from HTTPS URLs. - -- 4 The file system backup location on the NFS drive is relative to the path specified by the `--external-io-dir` flag set while [starting the node](cockroach-start.html). If the flag is set to `disabled`, then imports from local directories and NFS drives are disabled. - -- 5 Using a `nodeID` is required and the data files will be in the `extern` directory of the specified node. In most cases (including single-node clusters), using `nodelocal://1/` is sufficient. Use `self` if you do not want to specify a `nodeID`, and the individual data files will be in the `extern` directories of arbitrary nodes; however, to work correctly, each node must have the [`--external-io-dir` flag](cockroach-start.html#general) point to the same NFS mount or other network-backed, shared storage. - -- 6 A custom root CA can be appended to the system's default CAs by setting the `cloudstorage.http.custom_ca` [cluster setting](cluster-settings.html), which will be used when verifying certificates from an S3-compatible service. - -- 7 The `AWS_REGION` parameter is optional since it is not a required parameter for most S3-compatible services. Specify the parameter only if your S3-compatible service requires it. - -#### Example file URLs - -Location | Example --------------+---------------------------------------------------------------------------------- -Amazon S3 | `s3://acme-co/employees?AWS_ACCESS_KEY_ID=123&AWS_SECRET_ACCESS_KEY=456` -Azure | `azure://employees?AZURE_ACCOUNT_KEY=123&AZURE_ACCOUNT_NAME=acme-co` -Google Cloud | `gs://acme-co` -HTTP | `http://localhost:8080/employees` -NFS/Local | `nodelocal://1/path/employees`, `nodelocal://self/nfsmount/backups/employees` [5](#considerations) diff --git a/src/current/_includes/v22.1/cdc/initial-scan-limit-alter-changefeed.md b/src/current/_includes/v22.1/cdc/initial-scan-limit-alter-changefeed.md deleted file mode 100644 index feb0c8748e4..00000000000 --- a/src/current/_includes/v22.1/cdc/initial-scan-limit-alter-changefeed.md +++ /dev/null @@ -1,2 +0,0 @@ -You cannot use the new `initial_scan = "yes"/"no"/"only"` syntax with {% if page.name == "alter-changefeed.md" %} `ALTER CHANGEFEED` {% else %} -[`ALTER CHANGEFEED`](alter-changefeed.html) {% endif %} in v22.1. 
To ensure that you can modify a changefeed with the `initial_scan` options, use the previous syntax of `initial_scan`, `no_initial_scan`, and `initial_scan_only`. \ No newline at end of file diff --git a/src/current/_includes/v22.1/cdc/metrics-labels.md b/src/current/_includes/v22.1/cdc/metrics-labels.md deleted file mode 100644 index 398f6f72ca2..00000000000 --- a/src/current/_includes/v22.1/cdc/metrics-labels.md +++ /dev/null @@ -1,10 +0,0 @@ -To measure metrics per changefeed, you can define a "metrics label" for one or multiple changefeed(s). The changefeed(s) will increment each [changefeed metric](monitor-and-debug-changefeeds.html#metrics). Metrics label information is sent with time-series metrics to `http://{host}:{http-port}/_status/vars`, viewable via the [Prometheus endpoint](monitoring-and-alerting.html#prometheus-endpoint). An aggregated metric of all changefeeds is also measured. - -It is necessary to consider the following when applying metrics labels to changefeeds: - -- Metrics labels are **not** available in CockroachDB {{ site.data.products.cloud }}. -- The `COCKROACH_EXPERIMENTAL_ENABLE_PER_CHANGEFEED_METRICS` [environment variable](cockroach-commands.html#environment-variables) must be specified to use this feature. -- The `server.child_metrics.enabled` [cluster setting](cluster-settings.html) must be set to `true` before using the `metrics_label` option. -- Metrics label information is sent to the `_status/vars` endpoint, but will **not** show up in [`debug.zip`](cockroach-debug-zip.html) or the [DB Console](ui-overview.html). -- Introducing labels to isolate a changefeed's metrics can increase cardinality significantly. There is a limit of 1024 unique labels in place to prevent cardinality explosion. That is, when labels are applied to high-cardinality data (data with a higher number of unique values), each changefeed with a label then results in more metrics data to multiply together, which will grow over time. This will have an impact on performance as the metric-series data per changefeed quickly populates against its label. -- The maximum length of a metrics label is 128 bytes. diff --git a/src/current/_includes/v22.1/cdc/modify-changefeed.md b/src/current/_includes/v22.1/cdc/modify-changefeed.md deleted file mode 100644 index 8ca39aff5ad..00000000000 --- a/src/current/_includes/v22.1/cdc/modify-changefeed.md +++ /dev/null @@ -1,9 +0,0 @@ -To modify an {{ site.data.products.enterprise }} changefeed, [pause](create-and-configure-changefeeds.html#pause) the job and then use: - -~~~ sql -ALTER CHANGEFEED job_id {ADD table DROP table SET option UNSET option}; -~~~ - -You can add new table targets, remove them, set new [changefeed options](create-changefeed.html#options), and unset them. - -For more information, see [`ALTER CHANGEFEED`](alter-changefeed.html). diff --git a/src/current/_includes/v22.1/cdc/note-changefeed-message-page.md b/src/current/_includes/v22.1/cdc/note-changefeed-message-page.md deleted file mode 100644 index d61d4299b43..00000000000 --- a/src/current/_includes/v22.1/cdc/note-changefeed-message-page.md +++ /dev/null @@ -1 +0,0 @@ -For an overview of the messages emitted from changefeeds, see the [Changefeed Messages](changefeed-messages.html) page. 
\ No newline at end of file diff --git a/src/current/_includes/v22.1/cdc/options-table-note.md b/src/current/_includes/v22.1/cdc/options-table-note.md deleted file mode 100644 index 61a27aefcc0..00000000000 --- a/src/current/_includes/v22.1/cdc/options-table-note.md +++ /dev/null @@ -1 +0,0 @@ -This table shows the parameters for changefeeds to a specific sink. The `CREATE CHANGEFEED` page provides a list of all the available [options](create-changefeed.html#options). diff --git a/src/current/_includes/v22.1/cdc/print-key.md b/src/current/_includes/v22.1/cdc/print-key.md deleted file mode 100644 index ab0b0924d30..00000000000 --- a/src/current/_includes/v22.1/cdc/print-key.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -This example only prints the value. To print both the key and value of each message in the changefeed (e.g., to observe what happens with `DELETE`s), use the `--property print.key=true` flag. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/cdc/schema-registry-timeout.md b/src/current/_includes/v22.1/cdc/schema-registry-timeout.md deleted file mode 100644 index a6571084ef3..00000000000 --- a/src/current/_includes/v22.1/cdc/schema-registry-timeout.md +++ /dev/null @@ -1 +0,0 @@ -Use the {% if page.name == "create-changefeed.md" %} `timeout={duration}` query parameter {% else %} [`timeout={duration}` query parameter](create-changefeed.html#confluent-registry) {% endif %}([duration string](https://pkg.go.dev/time#ParseDuration)) in your Confluent Schema Registry URI to change the default timeout for contacting the schema registry. By default, the timeout is 30 seconds. \ No newline at end of file diff --git a/src/current/_includes/v22.1/cdc/sql-cluster-settings-example.md b/src/current/_includes/v22.1/cdc/sql-cluster-settings-example.md deleted file mode 100644 index e3e1025135a..00000000000 --- a/src/current/_includes/v22.1/cdc/sql-cluster-settings-example.md +++ /dev/null @@ -1,27 +0,0 @@ -1. As the `root` user, open the [built-in SQL client](cockroach-sql.html): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --insecure - ~~~ - -1. Set your organization name and [{{ site.data.products.enterprise }} license](enterprise-licensing.html) key that you received via email: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SET CLUSTER SETTING cluster.organization = ''; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SET CLUSTER SETTING enterprise.license = ''; - ~~~ - -1. Enable the `kv.rangefeed.enabled` [cluster setting](cluster-settings.html): - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SET CLUSTER SETTING kv.rangefeed.enabled = true; - ~~~ - - {% include {{ page.version.version }}/cdc/cdc-cloud-rangefeed.md %} diff --git a/src/current/_includes/v22.1/cdc/url-encoding.md b/src/current/_includes/v22.1/cdc/url-encoding.md deleted file mode 100644 index 2a681d7f913..00000000000 --- a/src/current/_includes/v22.1/cdc/url-encoding.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -Parameters should always be URI-encoded before they are included the changefeed's URI, as they often contain special characters. Use Javascript's [encodeURIComponent](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/encodeURIComponent) function or Go language's [url.QueryEscape](https://golang.org/pkg/net/url/#QueryEscape) function to URI-encode the parameters. Other languages provide similar functions to URI-encode special characters. 
-{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/cdc/virtual-computed-column-cdc.md b/src/current/_includes/v22.1/cdc/virtual-computed-column-cdc.md deleted file mode 100644 index cf1267c5206..00000000000 --- a/src/current/_includes/v22.1/cdc/virtual-computed-column-cdc.md +++ /dev/null @@ -1 +0,0 @@ -As of v22.1, changefeeds filter out [`VIRTUAL` computed columns](computed-columns.html) from events by default. This is a [backward-incompatible change](../releases/v22.1.html#v22-1-0-backward-incompatible-changes). To maintain the changefeed behavior in previous versions where [`NULL`](null-handling.html) values are emitted for virtual computed columns, see the [`virtual_columns`](create-changefeed.html#virtual-columns) option for more detail. diff --git a/src/current/_includes/v22.1/cdc/webhook-beta.md b/src/current/_includes/v22.1/cdc/webhook-beta.md deleted file mode 100644 index c1e0447742e..00000000000 --- a/src/current/_includes/v22.1/cdc/webhook-beta.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -The webhook sink is currently in **beta** — see [usage considerations](../{{ page.version.version }}/changefeed-sinks.html#webhook-sink), available [parameters](../{{ page.version.version }}/create-changefeed.html#parameters), and [options](../{{ page.version.version }}/create-changefeed.html#options) for more information. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/client-transaction-retry.md b/src/current/_includes/v22.1/client-transaction-retry.md deleted file mode 100644 index 2cae1347a18..00000000000 --- a/src/current/_includes/v22.1/client-transaction-retry.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -With the default `SERIALIZABLE` [isolation level](transactions.html#isolation-levels), CockroachDB may require the client to [retry a transaction](transactions.html#transaction-retries) in case of read/write [contention]({{ link_prefix }}performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention). CockroachDB provides a [generic retry function](transactions.html#client-side-intervention) that runs inside a transaction and retries it as needed. The code sample below shows how it is used. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/computed-columns/add-computed-column.md b/src/current/_includes/v22.1/computed-columns/add-computed-column.md deleted file mode 100644 index 5eff580e575..00000000000 --- a/src/current/_includes/v22.1/computed-columns/add-computed-column.md +++ /dev/null @@ -1,55 +0,0 @@ -In this example, create a table: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE x ( - a INT NULL, - b INT NULL AS (a * 2) STORED, - c INT NULL AS (a + 4) STORED, - FAMILY "primary" (a, b, rowid, c) - ); -~~~ - -Then, insert a row of data: - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO x VALUES (6); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM x; -~~~ - -~~~ -+---+----+----+ -| a | b | c | -+---+----+----+ -| 6 | 12 | 10 | -+---+----+----+ -(1 row) -~~~ - -Now add another virtual computed column to the table: - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE x ADD COLUMN d INT AS (a // 2) VIRTUAL; -~~~ - -The `d` column is added to the table and computed from the `a` column divided by 2. 
- -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM x; -~~~ - -~~~ -+---+----+----+---+ -| a | b | c | d | -+---+----+----+---+ -| 6 | 12 | 10 | 3 | -+---+----+----+---+ -(1 row) -~~~ diff --git a/src/current/_includes/v22.1/computed-columns/alter-computed-column.md b/src/current/_includes/v22.1/computed-columns/alter-computed-column.md deleted file mode 100644 index 0c554f1c630..00000000000 --- a/src/current/_includes/v22.1/computed-columns/alter-computed-column.md +++ /dev/null @@ -1,76 +0,0 @@ -To alter the formula for a computed column, you must [`DROP`](drop-column.html) and [`ADD`](add-column.html) the column back with the new definition. Take the following table for instance: - -{% include_cached copy-clipboard.html %} -~~~sql -> CREATE TABLE x ( -a INT NULL, -b INT NULL AS (a * 2) STORED, -c INT NULL AS (a + 4) STORED, -FAMILY "primary" (a, b, rowid, c) -); -~~~ -~~~ -CREATE TABLE - - -Time: 4ms total (execution 4ms / network 0ms) -~~~ - -Add a computed column `d`: - -{% include_cached copy-clipboard.html %} -~~~sql -> ALTER TABLE x ADD COLUMN d INT AS (a // 2) STORED; -~~~ -~~~ -ALTER TABLE - - -Time: 199ms total (execution 199ms / network 0ms) -~~~ - -If you try to alter it, you'll get an error: - -{% include_cached copy-clipboard.html %} -~~~sql -> ALTER TABLE x ALTER COLUMN d INT AS (a // 3) STORED; -~~~ -~~~ -invalid syntax: statement ignored: at or near "int": syntax error -SQLSTATE: 42601 -DETAIL: source SQL: -ALTER TABLE x ALTER COLUMN d INT AS (a // 3) STORED - ^ -HINT: try \h ALTER TABLE -~~~ - -However, you can drop it and then add it with the new definition: - -{% include_cached copy-clipboard.html %} -~~~sql -> SET sql_safe_updates = false; -> ALTER TABLE x DROP COLUMN d; -> ALTER TABLE x ADD COLUMN d INT AS (a // 3) STORED; -> SET sql_safe_updates = true; -~~~ -~~~ -SET - - -Time: 1ms total (execution 0ms / network 0ms) - -ALTER TABLE - - -Time: 195ms total (execution 195ms / network 0ms) - -ALTER TABLE - - -Time: 186ms total (execution 185ms / network 0ms) - -SET - - -Time: 0ms total (execution 0ms / network 0ms) -~~~ diff --git a/src/current/_includes/v22.1/computed-columns/convert-computed-column.md b/src/current/_includes/v22.1/computed-columns/convert-computed-column.md deleted file mode 100644 index 2c9897b8319..00000000000 --- a/src/current/_includes/v22.1/computed-columns/convert-computed-column.md +++ /dev/null @@ -1,108 +0,0 @@ -You can convert a stored, computed column into a regular column by using `ALTER TABLE`. 
- -In this example, create a simple table with a computed column: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE office_dogs ( - id INT PRIMARY KEY, - first_name STRING, - last_name STRING, - full_name STRING AS (CONCAT(first_name, ' ', last_name)) STORED - ); -~~~ - -Then, insert a few rows of data: - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO office_dogs (id, first_name, last_name) VALUES - (1, 'Petee', 'Hirata'), - (2, 'Carl', 'Kimball'), - (3, 'Ernie', 'Narayan'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM office_dogs; -~~~ - -~~~ -+----+------------+-----------+---------------+ -| id | first_name | last_name | full_name | -+----+------------+-----------+---------------+ -| 1 | Petee | Hirata | Petee Hirata | -| 2 | Carl | Kimball | Carl Kimball | -| 3 | Ernie | Narayan | Ernie Narayan | -+----+------------+-----------+---------------+ -(3 rows) -~~~ - -The `full_name` column is computed from the `first_name` and `last_name` columns without the need to define a [view](views.html). You can view the column details with the [`SHOW COLUMNS`](show-columns.html) statement: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM office_dogs; -~~~ - -~~~ -+-------------+-----------+-------------+----------------+------------------------------------+-------------+ -| column_name | data_type | is_nullable | column_default | generation_expression | indices | -+-------------+-----------+-------------+----------------+------------------------------------+-------------+ -| id | INT | false | NULL | | {"primary"} | -| first_name | STRING | true | NULL | | {} | -| last_name | STRING | true | NULL | | {} | -| full_name | STRING | true | NULL | concat(first_name, ' ', last_name) | {} | -+-------------+-----------+-------------+----------------+------------------------------------+-------------+ -(4 rows) -~~~ - -Now, convert the computed column (`full_name`) to a regular column: - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE office_dogs ALTER COLUMN full_name DROP STORED; -~~~ - -Check that the computed column was converted: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM office_dogs; -~~~ - -~~~ -+-------------+-----------+-------------+----------------+-----------------------+-------------+ -| column_name | data_type | is_nullable | column_default | generation_expression | indices | -+-------------+-----------+-------------+----------------+-----------------------+-------------+ -| id | INT | false | NULL | | {"primary"} | -| first_name | STRING | true | NULL | | {} | -| last_name | STRING | true | NULL | | {} | -| full_name | STRING | true | NULL | | {} | -+-------------+-----------+-------------+----------------+-----------------------+-------------+ -(4 rows) -~~~ - -The computed column is now a regular column and can be updated as such: - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO office_dogs (id, first_name, last_name, full_name) VALUES (4, 'Lola', 'McDog', 'This is not computed'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM office_dogs; -~~~ - -~~~ -+----+------------+-----------+----------------------+ -| id | first_name | last_name | full_name | -+----+------------+-----------+----------------------+ -| 1 | Petee | Hirata | Petee Hirata | -| 2 | Carl | Kimball | Carl Kimball | -| 3 | Ernie | Narayan | Ernie Narayan | -| 4 | Lola | McDog | This is not computed | 
-+----+------------+-----------+----------------------+ -(4 rows) -~~~ diff --git a/src/current/_includes/v22.1/computed-columns/jsonb.md b/src/current/_includes/v22.1/computed-columns/jsonb.md deleted file mode 100644 index 6b0ca92f80c..00000000000 --- a/src/current/_includes/v22.1/computed-columns/jsonb.md +++ /dev/null @@ -1,70 +0,0 @@ -In this example, create a table with a `JSONB` column and a stored computed column: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE student_profiles ( - id STRING PRIMARY KEY AS (profile->>'id') STORED, - profile JSONB -); -~~~ - -Create a computed column after you create the table: - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE student_profiles ADD COLUMN age INT AS ( (profile->>'age')::INT) STORED; -~~~ - -Then, insert a few rows of data: - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO student_profiles (profile) VALUES - ('{"id": "d78236", "name": "Arthur Read", "age": "16", "school": "PVPHS", "credits": 120, "sports": "none"}'), - ('{"name": "Buster Bunny", "age": "15", "id": "f98112", "school": "THS", "credits": 67, "clubs": "MUN"}'), - ('{"name": "Ernie Narayan", "school" : "Brooklyn Tech", "id": "t63512", "sports": "Track and Field", "clubs": "Chess"}'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM student_profiles; -~~~ -~~~ -+--------+---------------------------------------------------------------------------------------------------------------------+------+ -| id | profile | age | -+--------+---------------------------------------------------------------------------------------------------------------------+------+ -| d78236 | {"age": "16", "credits": 120, "id": "d78236", "name": "Arthur Read", "school": "PVPHS", "sports": "none"} | 16 | -| f98112 | {"age": "15", "clubs": "MUN", "credits": 67, "id": "f98112", "name": "Buster Bunny", "school": "THS"} | 15 | -| t63512 | {"clubs": "Chess", "id": "t63512", "name": "Ernie Narayan", "school": "Brooklyn Tech", "sports": "Track and Field"} | NULL | -+--------+---------------------------------------------------------------------------------------------------------------------+------+ -~~~ - -The primary key `id` is computed as a field from the `profile` column. The `age` column is also computed from the `profile` column's data. 
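Because `id` and `age` are stored like ordinary columns, queries can filter on them directly instead of applying JSONB operators to `profile` for every row. A minimal sketch against the `student_profiles` table above:

~~~ sql
-- Sketch: filter on the stored computed columns rather than on the raw JSONB value.
SELECT id, age
  FROM student_profiles
 WHERE age >= 16;
~~~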
- -This example shows how to add a stored computed column with a [coerced type](scalar-expressions.html#explicit-type-coercions): - -{% include_cached copy-clipboard.html %} -~~~ sql -CREATE TABLE json_data ( - id UUID PRIMARY KEY DEFAULT gen_random_uuid(), - json_info JSONB -); -INSERT INTO json_data (json_info) VALUES ('{"amount": "123.45"}'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER TABLE json_data ADD COLUMN amount DECIMAL AS ((json_info->>'amount')::DECIMAL) STORED; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -SELECT * FROM json_data; -~~~ - -~~~ - id | json_info | amount ----------------------------------------+----------------------+--------- - e7c3d706-1367-4d77-bfb4-386dfdeb10f9 | {"amount": "123.45"} | 123.45 -(1 row) -~~~ diff --git a/src/current/_includes/v22.1/computed-columns/secondary-index.md b/src/current/_includes/v22.1/computed-columns/secondary-index.md deleted file mode 100644 index 8b78325e695..00000000000 --- a/src/current/_includes/v22.1/computed-columns/secondary-index.md +++ /dev/null @@ -1,63 +0,0 @@ -In this example, create a table with a virtual computed column and an index on that column: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE gymnastics ( - id UUID PRIMARY KEY DEFAULT gen_random_uuid(), - athlete STRING, - vault DECIMAL, - bars DECIMAL, - beam DECIMAL, - floor DECIMAL, - combined_score DECIMAL AS (vault + bars + beam + floor) VIRTUAL, - INDEX total (combined_score DESC) - ); -~~~ - -Then, insert a few rows of data: - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO gymnastics (athlete, vault, bars, beam, floor) VALUES - ('Simone Biles', 15.933, 14.800, 15.300, 15.800), - ('Gabby Douglas', 0, 15.766, 0, 0), - ('Laurie Hernandez', 15.100, 0, 15.233, 14.833), - ('Madison Kocian', 0, 15.933, 0, 0), - ('Aly Raisman', 15.833, 0, 15.000, 15.366); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM gymnastics; -~~~ -~~~ -+--------------------------------------+------------------+--------+--------+--------+--------+----------------+ -| id | athlete | vault | bars | beam | floor | combined_score | -+--------------------------------------+------------------+--------+--------+--------+--------+----------------+ -| 3fe11371-6a6a-49de-bbef-a8dd16560fac | Aly Raisman | 15.833 | 0 | 15.000 | 15.366 | 46.199 | -| 56055a70-b4c7-4522-909b-8f3674b705e5 | Madison Kocian | 0 | 15.933 | 0 | 0 | 15.933 | -| 69f73fd1-da34-48bf-aff8-71296ce4c2c7 | Gabby Douglas | 0 | 15.766 | 0 | 0 | 15.766 | -| 8a7b730b-668d-4845-8d25-48bda25114d6 | Laurie Hernandez | 15.100 | 0 | 15.233 | 14.833 | 45.166 | -| b2c5ca80-21c2-4853-9178-b96ce220ea4d | Simone Biles | 15.933 | 14.800 | 15.300 | 15.800 | 61.833 | -+--------------------------------------+------------------+--------+--------+--------+--------+----------------+ -~~~ - -Now, run a query using the secondary index: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT athlete, combined_score FROM gymnastics ORDER BY combined_score DESC; -~~~ -~~~ -+------------------+----------------+ -| athlete | combined_score | -+------------------+----------------+ -| Simone Biles | 61.833 | -| Aly Raisman | 46.199 | -| Laurie Hernandez | 45.166 | -| Madison Kocian | 15.933 | -| Gabby Douglas | 15.766 | -+------------------+----------------+ -~~~ - -The athlete with the highest combined score of 61.833 is Simone Biles. 
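To confirm that the ordering query is served by the `total` secondary index on the virtual computed column rather than by a full scan and sort, you can inspect its plan; a minimal sketch:

~~~ sql
-- Sketch: check the statement plan for the ordering query above.
EXPLAIN SELECT athlete, combined_score FROM gymnastics ORDER BY combined_score DESC;
~~~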
diff --git a/src/current/_includes/v22.1/computed-columns/simple.md b/src/current/_includes/v22.1/computed-columns/simple.md deleted file mode 100644 index 24a86a59481..00000000000 --- a/src/current/_includes/v22.1/computed-columns/simple.md +++ /dev/null @@ -1,40 +0,0 @@ -In this example, let's create a simple table with a computed column: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE users ( - id UUID PRIMARY KEY DEFAULT gen_random_uuid(), - city STRING, - first_name STRING, - last_name STRING, - full_name STRING AS (CONCAT(first_name, ' ', last_name)) STORED, - address STRING, - credit_card STRING, - dl STRING UNIQUE CHECK (LENGTH(dl) < 8) -); -~~~ - -Then, insert a few rows of data: - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO users (first_name, last_name) VALUES - ('Lola', 'McDog'), - ('Carl', 'Kimball'), - ('Ernie', 'Narayan'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM users; -~~~ -~~~ - id | city | first_name | last_name | full_name | address | credit_card | dl -+--------------------------------------+------+------------+-----------+---------------+---------+-------------+------+ - 5740da29-cc0c-47af-921c-b275d21d4c76 | NULL | Ernie | Narayan | Ernie Narayan | NULL | NULL | NULL - e7e0b748-9194-4d71-9343-cd65218848f0 | NULL | Lola | McDog | Lola McDog | NULL | NULL | NULL - f00e4715-8ca7-4d5a-8de5-ef1d5d8092f3 | NULL | Carl | Kimball | Carl Kimball | NULL | NULL | NULL -(3 rows) -~~~ - -The `full_name` column is computed from the `first_name` and `last_name` columns without the need to define a [view](views.html). diff --git a/src/current/_includes/v22.1/computed-columns/virtual.md b/src/current/_includes/v22.1/computed-columns/virtual.md deleted file mode 100644 index 7d873440328..00000000000 --- a/src/current/_includes/v22.1/computed-columns/virtual.md +++ /dev/null @@ -1,41 +0,0 @@ -In this example, create a table with a `JSONB` column and virtual computed columns: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE student_profiles ( - id UUID PRIMARY KEY DEFAULT gen_random_uuid(), - profile JSONB, - full_name STRING AS (concat_ws(' ',profile->>'firstName', profile->>'lastName')) VIRTUAL, - birthday TIMESTAMP AS (parse_timestamp(profile->>'birthdate')) VIRTUAL -); -~~~ - -Then, insert a few rows of data: - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO student_profiles (profile) VALUES - ('{"id": "d78236", "firstName": "Arthur", "lastName": "Read", "birthdate": "2010-01-25", "school": "PVPHS", "credits": 120, "sports": "none"}'), - ('{"firstName": "Buster", "lastName": "Bunny", "birthdate": "2011-11-07", "id": "f98112", "school": "THS", "credits": 67, "clubs": "MUN"}'), - ('{"firstName": "Ernie", "lastName": "Narayan", "school" : "Brooklyn Tech", "id": "t63512", "sports": "Track and Field", "clubs": "Chess"}'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM student_profiles; -~~~ -~~~ - id | profile | full_name | birthday ----------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------+---------------+---------------------- - 0e420282-105d-473b-83e2-3b082e7033e4 | {"birthdate": "2011-11-07", "clubs": "MUN", "credits": 67, "firstName": "Buster", "id": "f98112", "lastName": "Bunny", "school": "THS"} | Buster Bunny | 2011-11-07 00:00:00 - 6e9b77cd-ec67-41ae-b346-7b3d89902c72 | {"birthdate": "2010-01-25", "credits": 120, 
"firstName": "Arthur", "id": "d78236", "lastName": "Read", "school": "PVPHS", "sports": "none"} | Arthur Read | 2010-01-25 00:00:00 - f74b21e3-dc1e-49b7-a648-3c9b9024a70f | {"clubs": "Chess", "firstName": "Ernie", "id": "t63512", "lastName": "Narayan", "school": "Brooklyn Tech", "sports": "Track and Field"} | Ernie Narayan | NULL -(3 rows) - - -Time: 2ms total (execution 2ms / network 0ms) -~~~ - -The virtual column `full_name` is computed as a field from the `profile` column's data. The first name and last name are concatenated and separated by a single whitespace character using the [`concat_ws` string function](functions-and-operators.html#string-and-byte-functions). - -The virtual column `birthday` is parsed as a `TIMESTAMP` value from the `profile` column's `birthdate` string value. The [`parse_timestamp` function](functions-and-operators.html) is used to parse strings in `TIMESTAMP` format. diff --git a/src/current/_includes/v22.1/connect/connection-url.md b/src/current/_includes/v22.1/connect/connection-url.md deleted file mode 100644 index ae994bb3047..00000000000 --- a/src/current/_includes/v22.1/connect/connection-url.md +++ /dev/null @@ -1,19 +0,0 @@ -
-Set a `DATABASE_URL` environment variable to your connection string. - -{% include_cached copy-clipboard.html %} -~~~ shell -export DATABASE_URL="{connection string}" -~~~ - -
- -
-Set a `DATABASE_URL` environment variable to your connection string. - -{% include_cached copy-clipboard.html %} -~~~ shell -$env:DATABASE_URL = "{connection string}" -~~~ - -
\ No newline at end of file diff --git a/src/current/_includes/v22.1/connect/core-note.md b/src/current/_includes/v22.1/connect/core-note.md deleted file mode 100644 index 7b701cafb80..00000000000 --- a/src/current/_includes/v22.1/connect/core-note.md +++ /dev/null @@ -1,7 +0,0 @@ -{{site.data.alerts.callout_info}} -The connection information shown on this page uses [client certificate and key authentication]({% link {{ page.version.version }}/authentication.md %}#client-authentication) to connect to a secure, CockroachDB {{ site.data.products.core }} cluster. - -To connect to a CockroachDB {{ site.data.products.core }} cluster with client certificate and key authentication, you must first [generate server and client certificates]({% link {{ page.version.version }}/authentication.md %}#using-digital-certificates-with-cockroachdb). - -For instructions on starting a secure cluster, see [Start a Local Cluster (Secure)]({% link {{ page.version.version }}/secure-a-cluster.md %}). -{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/connect/jdbc-connection-url.md b/src/current/_includes/v22.1/connect/jdbc-connection-url.md deleted file mode 100644 index c055a390b4e..00000000000 --- a/src/current/_includes/v22.1/connect/jdbc-connection-url.md +++ /dev/null @@ -1,19 +0,0 @@ -Set a `JDBC_DATABASE_URL` environment variable to your JDBC connection string. - -
- -{% include_cached copy-clipboard.html %} -~~~ shell -export JDBC_DATABASE_URL="{connection string}" -~~~ - -
- -
- -{% include_cached copy-clipboard.html %} -~~~ shell -$env:JDBC_DATABASE_URL = "{connection string}" -~~~ - -
diff --git a/src/current/_includes/v22.1/core-note.md b/src/current/_includes/v22.1/core-note.md deleted file mode 100644 index 7b701cafb80..00000000000 --- a/src/current/_includes/v22.1/core-note.md +++ /dev/null @@ -1,7 +0,0 @@ -{{site.data.alerts.callout_info}} -The connection information shown on this page uses [client certificate and key authentication]({% link {{ page.version.version }}/authentication.md %}#client-authentication) to connect to a secure, CockroachDB {{ site.data.products.core }} cluster. - -To connect to a CockroachDB {{ site.data.products.core }} cluster with client certificate and key authentication, you must first [generate server and client certificates]({% link {{ page.version.version }}/authentication.md %}#using-digital-certificates-with-cockroachdb). - -For instructions on starting a secure cluster, see [Start a Local Cluster (Secure)]({% link {{ page.version.version }}/secure-a-cluster.md %}). -{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/dedicated-pci-compliance.md b/src/current/_includes/v22.1/dedicated-pci-compliance.md deleted file mode 100644 index 97fa54068c7..00000000000 --- a/src/current/_includes/v22.1/dedicated-pci-compliance.md +++ /dev/null @@ -1,7 +0,0 @@ -{{site.data.alerts.callout_info}} -CockroachDB {{ site.data.products.dedicated }} clusters comply with the Payment Card Industry Data Security Standard (PCI DSS). Compliance is certified by a PCI Qualified Security Assessor (QSA). - -To achieve compliance with PCI DSS on a CockroachDB {{ site.data.products.dedicated }} cluster, you must ensure that any information related to payments or other personally-identifiable information (PII) is encrypted, tokenized, or masked before being written to CockroachDB. You can implement this data protection from within the customer application or through a third-party intermediary solution such as [Satori](https://satoricyber.com/). - -To learn more about achieving PCI DSS compliance with CockroachDB {{ site.data.products.dedicated }}, contact your Cockroach Labs account team. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/demo_movr.md b/src/current/_includes/v22.1/demo_movr.md deleted file mode 100644 index cde6c211213..00000000000 --- a/src/current/_includes/v22.1/demo_movr.md +++ /dev/null @@ -1,10 +0,0 @@ -Start the [MovR database](movr.html) on a 3-node CockroachDB demo cluster with a larger data set. 
- -{% include_cached copy-clipboard.html %} -~~~ shell -cockroach demo movr --num-histories 250000 --num-promo-codes 250000 --num-rides 125000 --num-users 12500 --num-vehicles 3750 --nodes 3 -~~~ - -{% comment %} -This is a test -{% endcomment %} diff --git a/src/current/_includes/v22.1/faq/auto-generate-unique-ids.html b/src/current/_includes/v22.1/faq/auto-generate-unique-ids.html deleted file mode 100644 index ee56e21b7e0..00000000000 --- a/src/current/_includes/v22.1/faq/auto-generate-unique-ids.html +++ /dev/null @@ -1,109 +0,0 @@ -To auto-generate unique row identifiers, use the [`UUID`](uuid.html) column with the `gen_random_uuid()` [function](functions-and-operators.html#id-generation-functions) as the [default value](default-value.html): - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE users ( - id UUID NOT NULL DEFAULT gen_random_uuid(), - city STRING NOT NULL, - name STRING NULL, - address STRING NULL, - credit_card STRING NULL, - CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC), - FAMILY "primary" (id, city, name, address, credit_card) -); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO users (name, city) VALUES ('Petee', 'new york'), ('Eric', 'seattle'), ('Dan', 'seattle'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM users; -~~~ - -~~~ - id | city | name | address | credit_card -+--------------------------------------+----------+-------+---------+-------------+ - cf8ee4e2-cd74-449a-b6e6-a0fb2017baa4 | new york | Petee | NULL | NULL - 2382564e-702f-42d9-a139-b6df535ae00a | seattle | Eric | NULL | NULL - 7d27e40b-263a-4891-b29b-d59135e55650 | seattle | Dan | NULL | NULL -(3 rows) -~~~ - -Alternatively, you can use the [`BYTES`](bytes.html) column with the `uuid_v4()` function as the default value instead: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE users2 ( - id BYTES DEFAULT uuid_v4(), - city STRING NOT NULL, - name STRING NULL, - address STRING NULL, - credit_card STRING NULL, - CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC), - FAMILY "primary" (id, city, name, address, credit_card) -); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO users2 (name, city) VALUES ('Anna', 'new york'), ('Jonah', 'seattle'), ('Terry', 'chicago'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM users; -~~~ - -~~~ - id | city | name | address | credit_card -+------------------------------------------------+----------+-------+---------+-------------+ - 4\244\277\323/\261M\007\213\275*\0060\346\025z | chicago | Terry | NULL | NULL - \273*t=u.F\010\274f/}\313\332\373a | new york | Anna | NULL | NULL - \004\\\364nP\024L)\252\364\222r$\274O0 | seattle | Jonah | NULL | NULL -(3 rows) -~~~ - -In either case, generated IDs will be 128-bit, large enough for there to be virtually no chance of generating non-unique values. Also, once the table grows beyond a single key-value range (more than 512 MiB by default), new IDs will be scattered across all of the table's ranges and, therefore, likely across different nodes. This means that multiple nodes will share in the load. - -This approach has the disadvantage of creating a primary key that may not be useful in a query directly, which can require a join with another table or a secondary index. 
- -If it is important for generated IDs to be stored in the same key-value range, you can use an [integer type](int.html) with the `unique_rowid()` [function](functions-and-operators.html#id-generation-functions) as the default value, either explicitly or via the [`SERIAL` pseudo-type](serial.html): - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE users3 ( - id INT DEFAULT unique_rowid(), - city STRING NOT NULL, - name STRING NULL, - address STRING NULL, - credit_card STRING NULL, - CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC), - FAMILY "primary" (id, city, name, address, credit_card) -); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO users3 (name, city) VALUES ('Blake', 'chicago'), ('Hannah', 'seattle'), ('Bobby', 'seattle'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM users3; -~~~ - -~~~ - id | city | name | address | credit_card -+--------------------+---------+--------+---------+-------------+ - 469048192112197633 | chicago | Blake | NULL | NULL - 469048192112263169 | seattle | Hannah | NULL | NULL - 469048192112295937 | seattle | Bobby | NULL | NULL -(3 rows) -~~~ - -Upon insert or upsert, the `unique_rowid()` function generates a default value from the timestamp and ID of the node executing the insert. Such time-ordered values are likely to be globally unique except in cases where a very large number of IDs (100,000+) are generated per node per second. Also, there can be gaps and the order is not completely guaranteed. - -For further background on UUIDs, see [What is a UUID, and Why Should You Care?](https://www.cockroachlabs.com/blog/what-is-a-uuid/). diff --git a/src/current/_includes/v22.1/faq/clock-synchronization-effects.md b/src/current/_includes/v22.1/faq/clock-synchronization-effects.md deleted file mode 100644 index 8e749ba39c7..00000000000 --- a/src/current/_includes/v22.1/faq/clock-synchronization-effects.md +++ /dev/null @@ -1,27 +0,0 @@ -CockroachDB requires moderate levels of clock synchronization to preserve data consistency. For this reason, when a node detects that its clock is out of sync with at least half of the other nodes in the cluster by 80% of the maximum offset allowed, it spontaneously shuts down. This offset defaults to 500ms but can be changed via the [`--max-offset`](cockroach-start.html#flags-max-offset) flag when starting each node. - -While [serializable consistency](https://en.wikipedia.org/wiki/Serializability) is maintained regardless of clock skew, skew outside the configured clock offset bounds can result in violations of single-key linearizability between causally dependent transactions. It's therefore important to prevent clocks from drifting too far by running [NTP](http://www.ntp.org/) or other clock synchronization software on each node. - -In very rare cases, CockroachDB can momentarily run with a stale clock. This can happen when using vMotion, which can suspend a VM running CockroachDB, migrate it to different hardware, and resume it. This will cause CockroachDB to be out of sync for a short period before it jumps to the correct time. During this window, it would be possible for a client to read stale data and write data derived from stale reads. By enabling the `server.clock.forward_jump_check_enabled` [cluster setting](cluster-settings.html), you can be alerted when the CockroachDB clock jumps forward, indicating it had been running with a stale clock. 
To protect against this on vMotion, however, use the [`--clock-device`](cockroach-start.html#general) flag to specify a [PTP hardware clock](https://www.kernel.org/doc/html/latest/driver-api/ptp.html) for CockroachDB to use when querying the current time. When doing so, you should not enable `server.clock.forward_jump_check_enabled` because forward jumps will be expected and harmless. For more information on how `--clock-device` interacts with vMotion, see [this blog post](https://core.vmware.com/blog/cockroachdb-vmotion-support-vsphere-7-using-precise-timekeeping). - -### Considerations - -When setting up clock synchronization: - -- All nodes in the cluster must be synced to the same time source, or to different sources that implement leap second smearing in the same way. For example, Google and Amazon have time sources that are compatible with each other (they implement [leap second smearing](https://developers.google.com/time/smear) in the same way), but are incompatible with the default NTP pool (which does not implement leap second smearing). -- For nodes running in AWS, we recommend [Amazon Time Sync Service](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service). For nodes running in GCP, we recommend [Google's internal NTP service](https://cloud.google.com/compute/docs/instances/configure-ntp#configure_ntp_for_your_instances). For nodes running elsewhere, we recommend [Google Public NTP](https://developers.google.com/time/). Note that the Google and Amazon time services can be mixed with each other, but they cannot be mixed with other time services (unless you have verified leap second behavior). Either all of your nodes should use the Google and Amazon services, or none of them should. -- If you do not want to use the Google or Amazon time sources, you can use [`chrony`](https://chrony.tuxfamily.org/index.html) and enable client-side leap smearing, unless the time source you're using already does server-side smearing. In most cases, we recommend the Google Public NTP time source because it handles smearing the leap second. If you use a different NTP time source that doesn't smear the leap second, you must configure client-side smearing manually and do so in the same way on each machine. -- Do not run more than one clock sync service on VMs where `cockroach` is running. -- {% include {{ page.version.version }}/misc/multiregion-max-offset.md %} - -### Tutorials - -For guidance on synchronizing clocks, see the tutorial for your deployment environment: - -Environment | Featured Approach -------------|--------------------- -[On-Premises](deploy-cockroachdb-on-premises.html#step-1-synchronize-clocks) | Use NTP with Google's external NTP service. -[AWS](deploy-cockroachdb-on-aws.html#step-3-synchronize-clocks) | Use the Amazon Time Sync Service. -[Azure](deploy-cockroachdb-on-microsoft-azure.html#step-3-synchronize-clocks) | Disable Hyper-V time synchronization and use NTP with Google's external NTP service. -[Digital Ocean](deploy-cockroachdb-on-digital-ocean.html#step-2-synchronize-clocks) | Use NTP with Google's external NTP service. -[GCE](deploy-cockroachdb-on-google-cloud-platform.html#step-3-synchronize-clocks) | Use NTP with Google's internal NTP service. 
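For reference, the forward-jump alerting described at the top of this section is controlled by a single cluster setting. A minimal sketch of enabling it (as noted above, leave it disabled if you rely on `--clock-device` under vMotion, since forward jumps are then expected and harmless):

~~~ sql
-- Sketch: alert when a node's clock jumps forward, indicating it may have been
-- running with a stale clock. Do not enable this together with --clock-device.
SET CLUSTER SETTING server.clock.forward_jump_check_enabled = true;
~~~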
diff --git a/src/current/_includes/v22.1/faq/clock-synchronization-monitoring.html b/src/current/_includes/v22.1/faq/clock-synchronization-monitoring.html deleted file mode 100644 index 7fb82e4d188..00000000000 --- a/src/current/_includes/v22.1/faq/clock-synchronization-monitoring.html +++ /dev/null @@ -1,8 +0,0 @@ -As explained in more detail [in our monitoring documentation](monitoring-and-alerting.html#prometheus-endpoint), each CockroachDB node exports a wide variety of metrics at `http://:/_status/vars` in the format used by the popular Prometheus timeseries database. Two of these metrics export how close each node's clock is to the clock of all other nodes: - -Metric | Definition --------|----------- -`clock_offset_meannanos` | The mean difference between the node's clock and other nodes' clocks in nanoseconds -`clock_offset_stddevnanos` | The standard deviation of the difference between the node's clock and other nodes' clocks in nanoseconds - -As described in [the above answer](#what-happens-when-node-clocks-are-not-properly-synchronized), a node will shut down if the mean offset of its clock from the other nodes' clocks exceeds 80% of the maximum offset allowed. It's recommended to monitor the `clock_offset_meannanos` metric and alert if it's approaching the 80% threshold of your cluster's configured max offset. diff --git a/src/current/_includes/v22.1/faq/differences-between-numberings.md b/src/current/_includes/v22.1/faq/differences-between-numberings.md deleted file mode 100644 index 80f7fe26d50..00000000000 --- a/src/current/_includes/v22.1/faq/differences-between-numberings.md +++ /dev/null @@ -1,11 +0,0 @@ - -| Property | UUID generated with `uuid_v4()` | INT generated with `unique_rowid()` | Sequences | -|--------------------------------------|-----------------------------------------|-----------------------------------------------|--------------------------------| -| Size | 16 bytes | 8 bytes | 1 to 8 bytes | -| Ordering properties | Unordered | Highly time-ordered | Highly time-ordered | -| Performance cost at generation | Small, scalable | Small, scalable | Variable, can cause [contention]({{ link_prefix }}performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention) | -| Value distribution | Uniformly distributed (128 bits) | Contains time and space (node ID) components | Dense, small values | -| Data locality | Maximally distributed | Values generated close in time are co-located | Highly local | -| `INSERT` latency when used as key | Small, insensitive to concurrency | Small, but increases with concurrent INSERTs | Higher | -| `INSERT` throughput when used as key | Highest | Limited by max throughput on 1 node | Limited by max throughput on 1 node | -| Read throughput when used as key | Highest (maximal parallelism) | Limited | Limited | diff --git a/src/current/_includes/v22.1/faq/sequential-numbers.md b/src/current/_includes/v22.1/faq/sequential-numbers.md deleted file mode 100644 index 0290c042060..00000000000 --- a/src/current/_includes/v22.1/faq/sequential-numbers.md +++ /dev/null @@ -1,8 +0,0 @@ -Sequential numbers can be generated in CockroachDB using the `unique_rowid()` built-in function or using [SQL sequences](create-sequence.html). However, note the following considerations: - -- Unless you need roughly-ordered numbers, use [`UUID`](uuid.html) values instead. See the [previous -FAQ](#how-do-i-auto-generate-unique-row-ids-in-cockroachdb) for details. -- [Sequences](create-sequence.html) produce **unique** values. 
However, not all values are guaranteed to be produced (e.g., when a transaction is canceled after it consumes a value) and the values may be slightly reordered (e.g., when a transaction that -consumes a lower sequence number commits after a transaction that consumes a higher number). -- For maximum performance, avoid using sequences or `unique_rowid()` to generate row IDs or indexed columns. Values generated in these ways are logically close to each other and can cause [contention]({{ link_prefix }}performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention) on a few data ranges during inserts. Instead, prefer [`UUID`](uuid.html) identifiers. -- {% include {{page.version.version}}/performance/use-hash-sharded-indexes.md %} diff --git a/src/current/_includes/v22.1/faq/sequential-transactions.md b/src/current/_includes/v22.1/faq/sequential-transactions.md deleted file mode 100644 index 684f2ce5d2a..00000000000 --- a/src/current/_includes/v22.1/faq/sequential-transactions.md +++ /dev/null @@ -1,19 +0,0 @@ -Most use cases that ask for a strong time-based write ordering can be solved with other, more distribution-friendly -solutions instead. For example, CockroachDB's [time travel queries (`AS OF SYSTEM -TIME`)](https://www.cockroachlabs.com/blog/time-travel-queries-select-witty_subtitle-the_future/) support the following: - -- Paginating through all the changes to a table or dataset -- Determining the order of changes to data over time -- Determining the state of data at some point in the past -- Determining the changes to data between two points of time - -Consider also that the values generated by `unique_rowid()`, described in the previous FAQ entries, also provide an approximate time ordering. - -However, if your application absolutely requires strong time-based write ordering, it is possible to create a strictly monotonic counter in CockroachDB that increases over time as follows: - -- Initially: `CREATE TABLE cnt(val INT PRIMARY KEY); INSERT INTO cnt(val) VALUES(1);` -- In each transaction: `INSERT INTO cnt(val) SELECT max(val)+1 FROM cnt RETURNING val;` - -This will cause [`INSERT`](insert.html) transactions to conflict with each other and effectively force the transactions to commit one at a time throughout the cluster, which in turn guarantees the values generated in this way are strictly increasing over time without gaps. The caveat is that performance is severely limited as a result. - -If you find yourself interested in this problem, please [contact us](support-resources.html) and describe your situation. We would be glad to help you find alternative solutions and possibly extend CockroachDB to better match your needs. diff --git a/src/current/_includes/v22.1/faq/simulate-key-value-store.html b/src/current/_includes/v22.1/faq/simulate-key-value-store.html deleted file mode 100644 index 4772fa5358c..00000000000 --- a/src/current/_includes/v22.1/faq/simulate-key-value-store.html +++ /dev/null @@ -1,13 +0,0 @@ -CockroachDB is a distributed SQL database built on a transactional and strongly-consistent key-value store. 
Although it is not possible to access the key-value store directly, you can mirror direct access using a "simple" table of two columns, with one set as the primary key: - -~~~ sql -> CREATE TABLE kv (k INT PRIMARY KEY, v BYTES); -~~~ - -When such a "simple" table has no indexes or foreign keys, [`INSERT`](insert.html)/[`UPSERT`](upsert.html)/[`UPDATE`](update.html)/[`DELETE`](delete.html) statements translate to key-value operations with minimal overhead (single digit percent slowdowns). For example, the following `UPSERT` to add or replace a row in the table would translate into a single key-value Put operation: - -~~~ sql -> UPSERT INTO kv VALUES (1, b'hello') -~~~ - -This SQL table approach also offers you a well-defined query language, a known transaction model, and the flexibility to add more columns to the table if the need arises. diff --git a/src/current/_includes/v22.1/faq/what-is-crdb.md b/src/current/_includes/v22.1/faq/what-is-crdb.md deleted file mode 100644 index 28857ed61fa..00000000000 --- a/src/current/_includes/v22.1/faq/what-is-crdb.md +++ /dev/null @@ -1,7 +0,0 @@ -CockroachDB is a [distributed SQL](https://www.cockroachlabs.com/blog/what-is-distributed-sql/) database built on a transactional and strongly-consistent key-value store. It **scales** horizontally; **survives** disk, machine, rack, and even datacenter failures with minimal latency disruption and no manual intervention; supports **strongly-consistent** ACID transactions; and provides a familiar **SQL** API for structuring, manipulating, and querying data. - -CockroachDB is inspired by Google's [Spanner](http://research.google.com/archive/spanner.html) and [F1](http://research.google.com/pubs/pub38125.html) technologies, and the [source code](https://github.com/cockroachdb/cockroach) is freely available. - -{{site.data.alerts.callout_success}} -For a deeper dive into CockroachDB's capabilities and how it fits into the database landscape, take the free [**Intro to Distributed SQL and CockroachDB**](https://university.cockroachlabs.com/courses/course-v1:crl+intro-to-distributed-sql-and-cockroachdb+self-paced/about) course on Cockroach University. 
-{{site.data.alerts.end}} \ No newline at end of file diff --git a/src/current/_includes/v22.1/filter-tabs/crdb-kubernetes.md b/src/current/_includes/v22.1/filter-tabs/crdb-kubernetes.md deleted file mode 100644 index db7f18ff324..00000000000 --- a/src/current/_includes/v22.1/filter-tabs/crdb-kubernetes.md +++ /dev/null @@ -1,4 +0,0 @@ -{% assign tab_names_html = "Secure;Insecure" %} -{% assign html_page_filenames = "orchestrate-a-local-cluster-with-kubernetes.html;orchestrate-a-local-cluster-with-kubernetes-insecure.html" %} - -{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %} diff --git a/src/current/_includes/v22.1/filter-tabs/crdb-single-kubernetes.md b/src/current/_includes/v22.1/filter-tabs/crdb-single-kubernetes.md deleted file mode 100644 index 409bdc1855c..00000000000 --- a/src/current/_includes/v22.1/filter-tabs/crdb-single-kubernetes.md +++ /dev/null @@ -1,4 +0,0 @@ -{% assign tab_names_html = "Secure;Insecure" %} -{% assign html_page_filenames = "deploy-cockroachdb-with-kubernetes.html;deploy-cockroachdb-with-kubernetes-insecure.html" %} - -{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %} diff --git a/src/current/_includes/v22.1/filter-tabs/crud-go.md b/src/current/_includes/v22.1/filter-tabs/crud-go.md deleted file mode 100644 index a69d0e4435c..00000000000 --- a/src/current/_includes/v22.1/filter-tabs/crud-go.md +++ /dev/null @@ -1,4 +0,0 @@ -{% assign tab_names_html = "Use pgx;Use GORM;Use lib/pq;Use upper/db" %} -{% assign html_page_filenames = "build-a-go-app-with-cockroachdb.html;build-a-go-app-with-cockroachdb-gorm.html;build-a-go-app-with-cockroachdb-pq.html;build-a-go-app-with-cockroachdb-upperdb.html" %} - -{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %} diff --git a/src/current/_includes/v22.1/filter-tabs/crud-java.md b/src/current/_includes/v22.1/filter-tabs/crud-java.md deleted file mode 100644 index 5cbdf749e09..00000000000 --- a/src/current/_includes/v22.1/filter-tabs/crud-java.md +++ /dev/null @@ -1,4 +0,0 @@ -{% assign tab_names_html = "Use JDBC;Use Hibernate;Use jOOQ;Use MyBatis-Spring" %} -{% assign html_page_filenames = "build-a-java-app-with-cockroachdb.html;build-a-java-app-with-cockroachdb-hibernate.html;build-a-java-app-with-cockroachdb-jooq.html;build-a-spring-app-with-cockroachdb-mybatis.html" %} - -{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %} diff --git a/src/current/_includes/v22.1/filter-tabs/crud-js.md b/src/current/_includes/v22.1/filter-tabs/crud-js.md deleted file mode 100644 index bb319ed88c1..00000000000 --- a/src/current/_includes/v22.1/filter-tabs/crud-js.md +++ /dev/null @@ -1,4 +0,0 @@ -{% assign tab_names_html = "Use node-postgres;Use Sequelize;Use Knex.js;Use Prisma;Use TypeORM" %} -{% assign html_page_filenames = "build-a-nodejs-app-with-cockroachdb.html;build-a-nodejs-app-with-cockroachdb-sequelize.html;build-a-nodejs-app-with-cockroachdb-knexjs.html;build-a-nodejs-app-with-cockroachdb-prisma.html;build-a-typescript-app-with-cockroachdb.html" %} - -{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %} diff --git a/src/current/_includes/v22.1/filter-tabs/crud-python.md b/src/current/_includes/v22.1/filter-tabs/crud-python.md deleted file mode 100644 index cb4905591f0..00000000000 
--- a/src/current/_includes/v22.1/filter-tabs/crud-python.md +++ /dev/null @@ -1,4 +0,0 @@ -{% assign tab_names_html = "Use psycopg3;Use psycopg2;Use SQLAlchemy;Use Django;Use peewee" %} -{% assign html_page_filenames = "build-a-python-app-with-cockroachdb-psycopg3.html;build-a-python-app-with-cockroachdb.html;build-a-python-app-with-cockroachdb-sqlalchemy.html;build-a-python-app-with-cockroachdb-django.html;https://docs.peewee-orm.com/en/latest/peewee/playhouse.html" %} - -{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %} diff --git a/src/current/_includes/v22.1/filter-tabs/crud-ruby.md b/src/current/_includes/v22.1/filter-tabs/crud-ruby.md deleted file mode 100644 index 5fc13aa697b..00000000000 --- a/src/current/_includes/v22.1/filter-tabs/crud-ruby.md +++ /dev/null @@ -1,4 +0,0 @@ -{% assign tab_names_html = "Use pg;Use ActiveRecord" %} -{% assign html_page_filenames = "build-a-ruby-app-with-cockroachdb.html;build-a-ruby-app-with-cockroachdb-activerecord.html" %} - -{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %} diff --git a/src/current/_includes/v22.1/filter-tabs/crud-spring.md b/src/current/_includes/v22.1/filter-tabs/crud-spring.md deleted file mode 100644 index bd4f66f19a7..00000000000 --- a/src/current/_includes/v22.1/filter-tabs/crud-spring.md +++ /dev/null @@ -1,4 +0,0 @@ -{% assign tab_names_html = "Use JDBC;Use JPA" %} -{% assign html_page_filenames = "build-a-spring-app-with-cockroachdb-jdbc.html;build-a-spring-app-with-cockroachdb-jpa.html" %} - -{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %} diff --git a/src/current/_includes/v22.1/filter-tabs/deploy-crdb-aws.md b/src/current/_includes/v22.1/filter-tabs/deploy-crdb-aws.md deleted file mode 100644 index 706e5d85b8f..00000000000 --- a/src/current/_includes/v22.1/filter-tabs/deploy-crdb-aws.md +++ /dev/null @@ -1,4 +0,0 @@ -{% assign tab_names_html = "Secure;Insecure" %} -{% assign html_page_filenames = "deploy-cockroachdb-on-aws.html;deploy-cockroachdb-on-aws-insecure.html" %} - -{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %} diff --git a/src/current/_includes/v22.1/filter-tabs/deploy-crdb-do.md b/src/current/_includes/v22.1/filter-tabs/deploy-crdb-do.md deleted file mode 100644 index 02e44afee30..00000000000 --- a/src/current/_includes/v22.1/filter-tabs/deploy-crdb-do.md +++ /dev/null @@ -1,4 +0,0 @@ -{% assign tab_names_html = "Secure;Insecure" %} -{% assign html_page_filenames = "deploy-cockroachdb-on-digital-ocean.html;deploy-cockroachdb-on-digital-ocean-insecure.html" %} - -{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %} diff --git a/src/current/_includes/v22.1/filter-tabs/deploy-crdb-gce.md b/src/current/_includes/v22.1/filter-tabs/deploy-crdb-gce.md deleted file mode 100644 index 5799dfec9f0..00000000000 --- a/src/current/_includes/v22.1/filter-tabs/deploy-crdb-gce.md +++ /dev/null @@ -1,4 +0,0 @@ -{% assign tab_names_html = "Secure;Insecure" %} -{% assign html_page_filenames = "deploy-cockroachdb-on-google-cloud-platform.html;deploy-cockroachdb-on-google-cloud-platform-insecure.html" %} - -{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %} diff --git 
a/src/current/_includes/v22.1/filter-tabs/deploy-crdb-ma.md b/src/current/_includes/v22.1/filter-tabs/deploy-crdb-ma.md deleted file mode 100644 index 3f1162b426c..00000000000 --- a/src/current/_includes/v22.1/filter-tabs/deploy-crdb-ma.md +++ /dev/null @@ -1,4 +0,0 @@ -{% assign tab_names_html = "Secure;Insecure" %} -{% assign html_page_filenames = "deploy-cockroachdb-on-microsoft-azure.html;deploy-cockroachdb-on-microsoft-azure-insecure.html" %} - -{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %} diff --git a/src/current/_includes/v22.1/filter-tabs/deploy-crdb-op.md b/src/current/_includes/v22.1/filter-tabs/deploy-crdb-op.md deleted file mode 100644 index fdf35c61162..00000000000 --- a/src/current/_includes/v22.1/filter-tabs/deploy-crdb-op.md +++ /dev/null @@ -1,4 +0,0 @@ -{% assign tab_names_html = "Secure;Insecure" %} -{% assign html_page_filenames = "deploy-cockroachdb-on-premises.html;deploy-cockroachdb-on-premises-insecure.html" %} - -{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %} diff --git a/src/current/_includes/v22.1/filter-tabs/perf-bench-tpc-c.md b/src/current/_includes/v22.1/filter-tabs/perf-bench-tpc-c.md deleted file mode 100644 index 1394f916add..00000000000 --- a/src/current/_includes/v22.1/filter-tabs/perf-bench-tpc-c.md +++ /dev/null @@ -1,4 +0,0 @@ -{% assign tab_names_html = "Local;Local (Multi-Region);Small;Medium;Large" %} -{% assign html_page_filenames = "performance-benchmarking-with-tpcc-local.html;performance-benchmarking-with-tpcc-local-multiregion.html;performance-benchmarking-with-tpcc-small.html;performance-benchmarking-with-tpcc-medium.html;performance-benchmarking-with-tpcc-large.html" %} - -{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %} diff --git a/src/current/_includes/v22.1/filter-tabs/security-cert.md b/src/current/_includes/v22.1/filter-tabs/security-cert.md deleted file mode 100644 index 0832e618021..00000000000 --- a/src/current/_includes/v22.1/filter-tabs/security-cert.md +++ /dev/null @@ -1,4 +0,0 @@ -{% assign tab_names_html = "Use cockroach cert;Use OpenSSL;Use custom CA" %} -{% assign html_page_filenames = "cockroach-cert.html;create-security-certificates-openssl.html;create-security-certificates-custom-ca.html" %} - -{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %} diff --git a/src/current/_includes/v22.1/filter-tabs/start-a-cluster.md b/src/current/_includes/v22.1/filter-tabs/start-a-cluster.md deleted file mode 100644 index 92a688078cb..00000000000 --- a/src/current/_includes/v22.1/filter-tabs/start-a-cluster.md +++ /dev/null @@ -1,4 +0,0 @@ -{% assign tab_names_html = "Secure;Insecure" %} -{% assign html_page_filenames = "secure-a-cluster.html;start-a-local-cluster.html" %} - -{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %} diff --git a/src/current/_includes/v22.1/import-table-deprecate.md b/src/current/_includes/v22.1/import-table-deprecate.md deleted file mode 100644 index a7a21c87f7e..00000000000 --- a/src/current/_includes/v22.1/import-table-deprecate.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -As of v22.1, certain `IMPORT TABLE` statements that defined the table schema inline are **not** supported. 
See [Import — Considerations](import.html#considerations) for more details. To import data into a new table, use [`CREATE TABLE`](create-table.html) followed by [`IMPORT INTO`](import-into.html). For an example, read [Import into a new table from a CSV file](import-into.html#import-into-a-new-table-from-a-csv-file). -{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/jdbc-connection-url.md b/src/current/_includes/v22.1/jdbc-connection-url.md deleted file mode 100644 index c055a390b4e..00000000000 --- a/src/current/_includes/v22.1/jdbc-connection-url.md +++ /dev/null @@ -1,19 +0,0 @@ -Set a `JDBC_DATABASE_URL` environment variable to your JDBC connection string. - -
- -{% include_cached copy-clipboard.html %} -~~~ shell -export JDBC_DATABASE_URL="{connection string}" -~~~ - -
- -
- -{% include_cached copy-clipboard.html %} -~~~ shell -$env:JDBC_DATABASE_URL = "{connection string}" -~~~ - -
diff --git a/src/current/_includes/v22.1/json/json-sample.go b/src/current/_includes/v22.1/json/json-sample.go deleted file mode 100644 index d5953a71ee2..00000000000 --- a/src/current/_includes/v22.1/json/json-sample.go +++ /dev/null @@ -1,79 +0,0 @@ -package main - -import ( - "database/sql" - "fmt" - "io/ioutil" - "net/http" - "time" - - _ "github.com/lib/pq" -) - -func main() { - db, err := sql.Open("postgres", "user=maxroach dbname=jsonb_test sslmode=disable port=26257") - if err != nil { - panic(err) - } - - // The Reddit API wants us to tell it where to start from. The first request - // we just say "null" to say "from the start", subsequent requests will use - // the value received from the last call. - after := "null" - - for i := 0; i < 41; i++ { - after, err = makeReq(db, after) - if err != nil { - panic(err) - } - // Reddit limits to 30 requests per minute, so do not do any more than that. - time.Sleep(2 * time.Second) - } -} - -func makeReq(db *sql.DB, after string) (string, error) { - // First, make a request to reddit using the appropriate "after" string. - client := &http.Client{} - req, err := http.NewRequest("GET", fmt.Sprintf("https://www.reddit.com/r/programming.json?after=%s", after), nil) - - req.Header.Add("User-Agent", `Go`) - - resp, err := client.Do(req) - if err != nil { - return "", err - } - - res, err := ioutil.ReadAll(resp.Body) - if err != nil { - return "", err - } - - // We've gotten back our JSON from reddit, we can use a couple SQL tricks to - // accomplish multiple things at once. - // The JSON reddit returns looks like this: - // { - // "data": { - // "children": [ ... ] - // }, - // "after": ... - // } - // We structure our query so that we extract the `children` field, and then - // expand that and insert each individual element into the database as a - // separate row. We then return the "after" field so we know how to make the - // next request. - r, err := db.Query(` - INSERT INTO jsonb_test.programming (posts) - SELECT json_array_elements($1->'data'->'children') - RETURNING $1->'data'->'after'`, - string(res)) - if err != nil { - return "", err - } - - // Since we did a RETURNING, we need to grab the result of our query. - r.Next() - var newAfter string - r.Scan(&newAfter) - - return newAfter, nil -} diff --git a/src/current/_includes/v22.1/json/json-sample.py b/src/current/_includes/v22.1/json/json-sample.py deleted file mode 100644 index 49e302613e0..00000000000 --- a/src/current/_includes/v22.1/json/json-sample.py +++ /dev/null @@ -1,44 +0,0 @@ -import json -import psycopg2 -import requests -import time - -conn = psycopg2.connect(database="jsonb_test", user="maxroach", host="localhost", port=26257) -conn.set_session(autocommit=True) -cur = conn.cursor() - -# The Reddit API wants us to tell it where to start from. The first request -# we just say "null" to say "from the start"; subsequent requests will use -# the value received from the last call. -url = "https://www.reddit.com/r/programming.json" -after = {"after": "null"} - -for n in range(41): - # First, make a request to reddit using the appropriate "after" string. - req = requests.get(url, params=after, headers={"User-Agent": "Python"}) - - # Decode the JSON and set "after" for the next request. - resp = req.json() - after = {"after": str(resp['data']['after'])} - - # Convert the JSON to a string to send to the database. - data = json.dumps(resp) - - # The JSON reddit returns looks like this: - # { - # "data": { - # "children": [ ... ] - # }, - # "after": ... 
- # } - # We structure our query so that we extract the `children` field, and then - # expand that and insert each individual element into the database as a - # separate row. - cur.execute("""INSERT INTO jsonb_test.programming (posts) - SELECT json_array_elements(%s->'data'->'children')""", (data,)) - - # Reddit limits to 30 requests per minute, so do not do any more than that. - time.sleep(2) - -cur.close() -conn.close() diff --git a/src/current/_includes/v22.1/known-limitations/cdc.md b/src/current/_includes/v22.1/known-limitations/cdc.md deleted file mode 100644 index 8083b4c61ff..00000000000 --- a/src/current/_includes/v22.1/known-limitations/cdc.md +++ /dev/null @@ -1,8 +0,0 @@ -- Changefeeds cannot be [backed up](backup.html) or [restored](restore.html). [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/73434) -- Changefeed target options are limited to tables and [column families](changefeeds-on-tables-with-column-families.html). [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/73435) -- Using a [cloud storage sink](changefeed-sinks.html#cloud-storage-sink) only works with `JSON` and emits [newline-delimited json](http://ndjson.org) files. [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/73432) -- Webhook sinks only support HTTPS. Use the [`insecure_tls_skip_verify`](create-changefeed.html#tls-skip-verify) parameter when testing to disable certificate verification; however, this still requires HTTPS and certificates. [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/73431) -- [Webhook sinks](changefeed-sinks.html#webhook-sink) and [Google Cloud Pub/Sub sinks](changefeed-sinks.html#google-cloud-pub-sub) only have support for emitting `JSON`. [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/73432) -- There is no concurrency configurability for [webhook sinks](changefeed-sinks.html#webhook-sink). [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/73430) -- Using the [`split_column_families`](create-changefeed.html#split-column-families) and [`resolved`](create-changefeed.html#resolved-option) options on the same changefeed will cause an error when using the following [sinks](changefeed-sinks.html): Kafka and Google Cloud Pub/Sub. Instead, use the individual `FAMILY` keyword to specify column families when creating a changefeed. [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/79452) -- There is no configuration for unordered messages for [Google Cloud Pub/Sub sinks](changefeed-sinks.html#google-cloud-pub-sub). You must specify the `region` parameter in the URI to maintain [ordering guarantees](changefeed-messages.html#ordering-guarantees). [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/80884) diff --git a/src/current/_includes/v22.1/known-limitations/copy-syntax.md b/src/current/_includes/v22.1/known-limitations/copy-syntax.md deleted file mode 100644 index 36b57030e9b..00000000000 --- a/src/current/_includes/v22.1/known-limitations/copy-syntax.md +++ /dev/null @@ -1,13 +0,0 @@ -CockroachDB does not yet support the following `COPY` syntax: - -- `COPY ... TO`. To copy data from a CockroachDB cluster to a file, use an [`EXPORT`](export.html) statement. - - [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/41608) - -- Various unsupported `COPY` options (`FORMAT`, `FREEZE`, etc.) - - [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/41608) - -- `COPY ... FROM ... 
WHERE ` - - [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/54580) diff --git a/src/current/_includes/v22.1/known-limitations/drop-single-partition.md b/src/current/_includes/v22.1/known-limitations/drop-single-partition.md deleted file mode 100644 index 3d8166fdc04..00000000000 --- a/src/current/_includes/v22.1/known-limitations/drop-single-partition.md +++ /dev/null @@ -1 +0,0 @@ -CockroachDB does not currently support dropping a single partition from a table. In order to remove partitions, you can [repartition]({% unless page.name == "partitioning.md" %}partitioning.html{% endunless %}#repartition-a-table) the table. diff --git a/src/current/_includes/v22.1/known-limitations/drop-unique-index-from-create-table.md b/src/current/_includes/v22.1/known-limitations/drop-unique-index-from-create-table.md deleted file mode 100644 index 698a24c24ef..00000000000 --- a/src/current/_includes/v22.1/known-limitations/drop-unique-index-from-create-table.md +++ /dev/null @@ -1 +0,0 @@ -[`UNIQUE` indexes](create-index.html) created as part of a [`CREATE TABLE`](create-table.html) statement cannot be removed without using [`CASCADE`]({% unless page.name == "drop-index.md" %}drop-index.html{% endunless %}#remove-an-index-and-dependent-objects-with-cascade). Unique indexes created with [`CREATE INDEX`](create-index.html) do not have this limitation. diff --git a/src/current/_includes/v22.1/known-limitations/dropping-renaming-during-upgrade.md b/src/current/_includes/v22.1/known-limitations/dropping-renaming-during-upgrade.md deleted file mode 100644 index 38f7f9ddd87..00000000000 --- a/src/current/_includes/v22.1/known-limitations/dropping-renaming-during-upgrade.md +++ /dev/null @@ -1,10 +0,0 @@ -When upgrading from v20.1.x to v20.2.0, as soon as any node of the cluster has run v20.2.0, it is important to avoid dropping, renaming, or truncating tables, views, sequences, or databases on the v20.1 nodes. This is true even in cases where nodes were upgraded to v20.2.0 and then rolled back to v20.1. - -In this case, avoid running the following operations against v20.1 nodes: - -- [`DROP TABLE`](drop-table.html), [`TRUNCATE TABLE`](truncate.html), [`RENAME TABLE`](rename-table.html) -- [`DROP VIEW`](drop-view.html) -- [`DROP SEQUENCE`](drop-sequence.html), [`RENAME SEQUENCE`](rename-sequence.html) -- [`DROP DATABASE`](drop-database.html), [`RENAME DATABASE`](rename-database.html) - -Running any of these operations against v19.2 nodes will result in inconsistency between two internal tables, `system.namespace` and `system.namespace2`. This inconsistency will prevent you from being able to recreate the dropped or renamed objects; the returned error will be `ERROR: relation already exists`. In the case of a dropped or renamed database, [`SHOW DATABASES`](show-databases.html) will also return an error: `ERROR: internal error: "" is not a database`. diff --git a/src/current/_includes/v22.1/known-limitations/import-high-disk-contention.md b/src/current/_includes/v22.1/known-limitations/import-high-disk-contention.md deleted file mode 100644 index 0e016ecaac5..00000000000 --- a/src/current/_includes/v22.1/known-limitations/import-high-disk-contention.md +++ /dev/null @@ -1,6 +0,0 @@ -[`IMPORT`](import.html) can sometimes fail with a "context canceled" error, or can restart itself many times without ever finishing. If this is happening, it is likely due to a high amount of disk contention. 
This can be mitigated by setting the `kv.bulk_io_write.max_rate` [cluster setting](cluster-settings.html) to a value below your max disk write speed. For example, to set it to 10MB/s, execute: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SET CLUSTER SETTING kv.bulk_io_write.max_rate = '10MB'; -~~~ diff --git a/src/current/_includes/v22.1/known-limitations/old-multi-col-stats.md b/src/current/_includes/v22.1/known-limitations/old-multi-col-stats.md deleted file mode 100644 index 595be9c7209..00000000000 --- a/src/current/_includes/v22.1/known-limitations/old-multi-col-stats.md +++ /dev/null @@ -1,3 +0,0 @@ -When a column is dropped from a multi-column index, the {% if page.name == "cost-based-optimizer.md" %} optimizer {% else %} [optimizer](cost-based-optimizer.html) {% endif %} will not collect new statistics for the deleted column. However, the optimizer never deletes the old [multi-column statistics](create-statistics.html#create-statistics-on-multiple-columns). This can cause a buildup of statistics in `system.table_statistics`, leading the optimizer to use stale statistics, which could result in sub-optimal plans. To work around this issue and avoid these scenarios, explicitly [delete those statistics](create-statistics.html#delete-statistics) from the `system.table_statistics` table. - - [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/67407) diff --git a/src/current/_includes/v22.1/known-limitations/partitioning-with-placeholders.md b/src/current/_includes/v22.1/known-limitations/partitioning-with-placeholders.md deleted file mode 100644 index b3c3345200d..00000000000 --- a/src/current/_includes/v22.1/known-limitations/partitioning-with-placeholders.md +++ /dev/null @@ -1 +0,0 @@ -When defining a [table partition](partitioning.html), either during table creation or table alteration, it is not possible to use placeholders in the `PARTITION BY` clause. diff --git a/src/current/_includes/v22.1/known-limitations/restore-multiregion-match.md b/src/current/_includes/v22.1/known-limitations/restore-multiregion-match.md deleted file mode 100644 index 6d0f6c989fc..00000000000 --- a/src/current/_includes/v22.1/known-limitations/restore-multiregion-match.md +++ /dev/null @@ -1,48 +0,0 @@ -[`REGIONAL BY TABLE`](multiregion-overview.html#regional-tables) and [`REGIONAL BY ROW`](multiregion-overview.html#regional-by-row-tables) tables can be restored **only** if the regions of the backed-up table match those of the target database. All of the following must be true for `RESTORE` to be successful: - - * The source database and the destination database have the same set of [regions](multiregion-overview.html#database-regions). - * The regions were added to each of the databases in the same order. - * The databases have the same [primary region](set-primary-region.html). - - The following example has **mismatched** regions because the database regions were not added in the same order and the primary regions do not match.
- - Running on the source database: - - ~~~ sql - ALTER DATABASE source_database SET PRIMARY REGION "us-east1"; - ~~~ - ~~~ sql - ALTER DATABASE source_database ADD region "us-west1"; - ~~~ - - Running on the destination database: - - ~~~ sql - ALTER DATABASE destination_database SET PRIMARY REGION "us-west1"; - ~~~ - ~~~ sql - ALTER DATABASE destination_database ADD region "us-east1"; - ~~~ - - In addition, the following scenario has mismatched regions between the databases since the regions were not added to the database in the same order. - - Running on the source database: - - ~~~ sql - ALTER DATABASE source_database SET PRIMARY REGION "us-east1"; - ~~~ - ~~~ sql - ALTER DATABASE source_database ADD region "us-west1"; - ~~~ - - Running on the destination database: - - ~~~ sql - ALTER DATABASE destination_database SET PRIMARY REGION "us-west1"; - ~~~ - ~~~ sql - ALTER DATABASE destination_database ADD region "us-east1"; - ~~~ - ~~~ sql - ALTER DATABASE destination_database SET PRIMARY REGION "us-east1"; - ~~~ diff --git a/src/current/_includes/v22.1/known-limitations/restore-tables-non-multi-reg.md b/src/current/_includes/v22.1/known-limitations/restore-tables-non-multi-reg.md deleted file mode 100644 index 45ce8db1924..00000000000 --- a/src/current/_includes/v22.1/known-limitations/restore-tables-non-multi-reg.md +++ /dev/null @@ -1 +0,0 @@ -Restoring [`GLOBAL`](multiregion-overview.html#global-tables) and [`REGIONAL BY TABLE`](multiregion-overview.html#regional-tables) tables into a **non**-multi-region database is not supported. [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/71502) diff --git a/src/current/_includes/v22.1/known-limitations/row-level-ttl-limitations.md b/src/current/_includes/v22.1/known-limitations/row-level-ttl-limitations.md deleted file mode 100644 index fd4db41985f..00000000000 --- a/src/current/_includes/v22.1/known-limitations/row-level-ttl-limitations.md +++ /dev/null @@ -1,10 +0,0 @@ -- You cannot use [foreign keys](foreign-key.html) to create references to or from a table that uses Row-Level TTL. [cockroachdb/cockroach#76407](https://github.com/cockroachdb/cockroach/issues/76407) -- Any queries you run against tables with Row-Level TTL enabled do not filter out expired rows from the result set (this includes [`UPDATE`s](update.html) and [`DELETE`s](delete.html)). This feature may be added in a future release. For now, follow the instructions in [Filter out expired rows from a selection query](row-level-ttl.html#filter-out-expired-rows-from-a-selection-query). -- The TTL cannot be customized based on the values of other columns in the row. [cockroachdb/cockroach#76916](https://github.com/cockroachdb/cockroach/issues/76916) - - Because of the above limitation, adding TTL to large existing tables [can negatively affect performance](row-level-ttl.html#ttl-existing-table-performance-note), since a new column must be created and backfilled for every row. Creating a new table with a TTL is not affected by this limitation. -- The queries executed by Row-Level TTL are not yet optimized for performance: - - They do not use any indexes that may be available on the [`crdb_internal_expiration` column](row-level-ttl.html#crdb-internal-expiration). - - They do not take into account [node localities](cockroach-start.html#locality). - - All deletes are run on a single node, instead of being distributed. 
- - For details, see [cockroachdb/cockroach#76914](https://github.com/cockroachdb/cockroach/issues/76914) -- If you [override the TTL for a row by setting `crdb_internal_expiration` directly](row-level-ttl.html#set-the-row-level-ttl-for-an-individual-row), and the row is later updated (e.g., using an [`ON UPDATE` expression](create-table.html#on-update-expressions)), the TTL override is lost; it is reset to `now() + ttl_expire_after`. diff --git a/src/current/_includes/v22.1/known-limitations/schema-change-ddl-inside-multi-statement-transactions.md b/src/current/_includes/v22.1/known-limitations/schema-change-ddl-inside-multi-statement-transactions.md deleted file mode 100644 index 0c8be84fd54..00000000000 --- a/src/current/_includes/v22.1/known-limitations/schema-change-ddl-inside-multi-statement-transactions.md +++ /dev/null @@ -1,60 +0,0 @@ -Schema change [DDL](https://en.wikipedia.org/wiki/Data_definition_language#ALTER_statement) statements that run inside a multi-statement transaction with non-DDL statements can fail at [`COMMIT`](commit-transaction.html) time, even if other statements in the transaction succeed. This leaves such transactions in a "partially committed, partially aborted" state that may require manual intervention to determine whether the DDL statements succeeded. - -If such a failure occurs, CockroachDB will emit a CockroachDB-specific error code, `XXA00`, and the following error message: - -``` -transaction committed but schema change aborted with error: -HINT: Some of the non-DDL statements may have committed successfully, but some of the DDL statement(s) failed. -Manual inspection may be required to determine the actual state of the database. -``` - -{{site.data.alerts.callout_danger}} -If you must execute schema change DDL statements inside a multi-statement transaction, we **strongly recommend** checking for this error code and handling it appropriately every time you execute such transactions. -{{site.data.alerts.end}} - -This error will occur in various scenarios, including but not limited to: - -- Creating a unique index fails because values aren't unique. -- The evaluation of a computed value fails. -- Adding a constraint (or a column with a constraint) fails because the constraint is violated for the default/computed values in the column. - -To see an example of this error, start by creating the following table. - -{% include_cached copy-clipboard.html %} -~~~ sql -CREATE TABLE T(x INT); -INSERT INTO T(x) VALUES (1), (2), (3); -~~~ - -Then, enter the following multi-statement transaction, which will trigger the error. - -{% include_cached copy-clipboard.html %} -~~~ sql -BEGIN; -ALTER TABLE t ADD CONSTRAINT unique_x UNIQUE(x); -INSERT INTO T(x) VALUES (3); -COMMIT; -~~~ - -~~~ -pq: transaction committed but schema change aborted with error: (23505): duplicate key value (x)=(3) violates unique constraint "unique_x" -HINT: Some of the non-DDL statements may have committed successfully, but some of the DDL statement(s) failed. -Manual inspection may be required to determine the actual state of the database. -~~~ - -In this example, the [`INSERT`](insert.html) statement committed, but the [`ALTER TABLE`](alter-table.html) statement adding a [`UNIQUE` constraint](unique.html) failed. We can verify this by looking at the data in table `t` and seeing that the additional non-unique value `3` was successfully inserted. 
- -{% include_cached copy-clipboard.html %} -~~~ sql -SELECT * FROM t; -~~~ - -~~~ - x -+---+ - 1 - 2 - 3 - 3 -(4 rows) -~~~ diff --git a/src/current/_includes/v22.1/known-limitations/schema-changes-between-prepared-statements.md b/src/current/_includes/v22.1/known-limitations/schema-changes-between-prepared-statements.md deleted file mode 100644 index 736fe99df61..00000000000 --- a/src/current/_includes/v22.1/known-limitations/schema-changes-between-prepared-statements.md +++ /dev/null @@ -1,33 +0,0 @@ -When the schema of a table targeted by a prepared statement changes after the prepared statement is created, future executions of the prepared statement could result in an error. For example, adding a column to a table referenced in a prepared statement with a `SELECT *` clause will result in an error: - -{% include_cached copy-clipboard.html %} -~~~ sql -CREATE TABLE users (id INT PRIMARY KEY); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -PREPARE prep1 AS SELECT * FROM users; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER TABLE users ADD COLUMN name STRING; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -INSERT INTO users VALUES (1, 'Max Roach'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -EXECUTE prep1; -~~~ - -~~~ -ERROR: cached plan must not change result type -SQLSTATE: 0A000 -~~~ - -It's therefore recommended to explicitly list result columns instead of using `SELECT *` in prepared statements, when possible. diff --git a/src/current/_includes/v22.1/known-limitations/schema-changes-within-transactions.md b/src/current/_includes/v22.1/known-limitations/schema-changes-within-transactions.md deleted file mode 100644 index b0a62d43e34..00000000000 --- a/src/current/_includes/v22.1/known-limitations/schema-changes-within-transactions.md +++ /dev/null @@ -1,9 +0,0 @@ -Within a single [transaction](transactions.html): - -- You can run schema changes inside the same transaction as a [`CREATE TABLE`](create-table.html) statement. For more information, see [Run schema changes inside a transaction with `CREATE TABLE`](online-schema-changes.html#run-schema-changes-inside-a-transaction-with-create-table). However, a `CREATE TABLE` statement containing [`FOREIGN KEY`](foreign-key.html) clauses cannot be followed by statements that reference the new table. -- [Schema change DDL statements inside a multi-statement transaction can fail while other statements succeed](#schema-change-ddl-statements-inside-a-multi-statement-transaction-can-fail-while-other-statements-succeed). -- [`DROP COLUMN`](drop-column.html) can result in data loss if one of the other schema changes in the transaction fails or is canceled. To work around this, move the `DROP COLUMN` statement to its own explicit transaction or run it in a single statement outside the existing transaction. - -{{site.data.alerts.callout_info}} -If a schema change within a transaction fails, manual intervention may be needed to determine which statement has failed. After determining which schema change(s) failed, you can then retry the schema change. 
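For example, one way to see which schema change jobs (if any) did not complete is to query `SHOW JOBS`. This is a minimal sketch, and the exact columns returned by `SHOW JOBS` can vary by version:

{% include_cached copy-clipboard.html %}
~~~ sql
-- List schema change jobs that did not complete successfully.
SELECT job_id, description, status, error
  FROM [SHOW JOBS]
 WHERE job_type = 'SCHEMA CHANGE'
   AND status IN ('failed', 'reverting');
~~~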
-{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/known-limitations/set-transaction-no-rollback.md b/src/current/_includes/v22.1/known-limitations/set-transaction-no-rollback.md deleted file mode 100644 index 4ab3661f4f7..00000000000 --- a/src/current/_includes/v22.1/known-limitations/set-transaction-no-rollback.md +++ /dev/null @@ -1,17 +0,0 @@ -{% if page.name == "set-vars.md" %} `SET` {% else %} [`SET`](set-vars.html) {% endif %} does not properly apply [`ROLLBACK`](rollback-transaction.html) within a transaction. For example, in the following transaction, showing the `TIME ZONE` [variable](set-vars.html#supported-variables) does not return `2` as expected after the rollback: - -~~~sql -SET TIME ZONE +2; -BEGIN; -SET TIME ZONE +3; -ROLLBACK; -SHOW TIME ZONE; -~~~ - -~~~sql -timezone ------------- -3 -~~~ - -[Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/69396) diff --git a/src/current/_includes/v22.1/known-limitations/show-backup-locality-incremental-location.md b/src/current/_includes/v22.1/known-limitations/show-backup-locality-incremental-location.md deleted file mode 100644 index c19aa8d10b4..00000000000 --- a/src/current/_includes/v22.1/known-limitations/show-backup-locality-incremental-location.md +++ /dev/null @@ -1 +0,0 @@ -{% if page.name == "show-backup.md" %}`SHOW BACKUP`{% else %}[`SHOW BACKUP`](show-backup.html){% endif %} can display backups taken with the `incremental_location` option **or** for [locality-aware backups](take-and-restore-locality-aware-backups.html). It will not display backups for locality-aware backups taken with the `incremental_location` option. [Tracking GitHub issue](https://github.com/cockroachdb/cockroach/issues/82912). \ No newline at end of file diff --git a/src/current/_includes/v22.1/known-limitations/single-col-stats-deletion.md b/src/current/_includes/v22.1/known-limitations/single-col-stats-deletion.md deleted file mode 100644 index b8baa46c5d2..00000000000 --- a/src/current/_includes/v22.1/known-limitations/single-col-stats-deletion.md +++ /dev/null @@ -1,3 +0,0 @@ -[Single-column statistics](create-statistics.html#create-statistics-on-a-single-column) are not deleted when columns are dropped, which could cause minor performance issues. - - [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/67407) diff --git a/src/current/_includes/v22.1/known-limitations/sql-cursors.md b/src/current/_includes/v22.1/known-limitations/sql-cursors.md deleted file mode 100644 index e204de9f74a..00000000000 --- a/src/current/_includes/v22.1/known-limitations/sql-cursors.md +++ /dev/null @@ -1,25 +0,0 @@ -CockroachDB implements SQL {% if page.name == "known-limitations.md" %} [cursor](cursors.html) {% else %} cursor {% endif %} support with the following limitations: - -- `DECLARE` only supports forward cursors. Reverse cursors created with `DECLARE SCROLL` are not supported. [cockroachdb/cockroach#77102](https://github.com/cockroachdb/cockroach/issues/77102) -- `FETCH` supports forward, relative, and absolute variants, but only for forward cursors. [cockroachdb/cockroach#77102](https://github.com/cockroachdb/cockroach/issues/77102) -- `BINARY CURSOR`, which returns data in the Postgres binary format, is not supported. [cockroachdb/cockroach#77099](https://github.com/cockroachdb/cockroach/issues/77099) -- `MOVE`, which allows advancing the cursor without returning any rows, is not supported. 
[cockroachdb/cockroach#77100](https://github.com/cockroachdb/cockroach/issues/77100) - - `WITH HOLD`, which allows keeping a cursor open for longer than a transaction by writing its results into a buffer, is accepted as valid syntax within a single transaction but is not supported. It acts as a no-op and does not actually perform the function of `WITH HOLD`, which is to make the cursor live outside its parent transaction. Instead, if you are using `WITH HOLD`, you will be forced to close that cursor within the transaction it was created in. [cockroachdb/cockroach#77101](https://github.com/cockroachdb/cockroach/issues/77101) - - This syntax is accepted (but does not have any effect): - {% include_cached copy-clipboard.html %} - ~~~ sql - BEGIN; - DECLARE test_cur CURSOR WITH HOLD FOR SELECT * FROM foo ORDER BY bar; - CLOSE test_cur; - COMMIT; - ~~~ - - This syntax is not accepted, and will result in an error: - {% include_cached copy-clipboard.html %} - ~~~ sql - BEGIN; - DECLARE test_cur CURSOR WITH HOLD FOR SELECT * FROM foo ORDER BY bar; - COMMIT; -- This will fail with an error because CLOSE test_cur was not called inside the transaction. - ~~~ -- Scrollable cursors (also known as reverse `FETCH`) are not supported. -- [`SELECT ... FOR UPDATE`](select-for-update.html) with a cursor is not supported. [cockroachdb/cockroach#77103](https://github.com/cockroachdb/cockroach/issues/77103) -- Respect for [`SAVEPOINT`s](savepoint.html) is not supported. Cursor definitions do not disappear properly if rolled back to a `SAVEPOINT` from before they were created. [cockroachdb/cockroach#77104](https://github.com/cockroachdb/cockroach/issues/77104) diff --git a/src/current/_includes/v22.1/known-limitations/stats-refresh-upgrade.md b/src/current/_includes/v22.1/known-limitations/stats-refresh-upgrade.md deleted file mode 100644 index f54a08b3754..00000000000 --- a/src/current/_includes/v22.1/known-limitations/stats-refresh-upgrade.md +++ /dev/null @@ -1,3 +0,0 @@ -The [automatic statistics refresher](cost-based-optimizer.html#control-statistics-refresh-rate) checks whether it needs to refresh statistics for every table in the database upon startup of each node in the cluster. If statistics for a table have not been refreshed in a while, this will trigger collection of statistics for that table. If statistics have been refreshed recently, it will not force a refresh. As a result, the automatic statistics refresher does not necessarily perform a refresh of statistics after an [upgrade](upgrade-cockroach-version.html). This could cause a problem, for example, if the upgrade moves from a version without [histograms](cost-based-optimizer.html#control-histogram-collection) to a version with histograms. To refresh statistics manually, use [`CREATE STATISTICS`](create-statistics.html). - - [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/54816) diff --git a/src/current/_includes/v22.1/known-limitations/userfile-upload-non-recursive.md b/src/current/_includes/v22.1/known-limitations/userfile-upload-non-recursive.md deleted file mode 100644 index 19db5fde6a4..00000000000 --- a/src/current/_includes/v22.1/known-limitations/userfile-upload-non-recursive.md +++ /dev/null @@ -1 +0,0 @@ -- `cockroach userfile upload` does not currently allow recursive uploads from a directory. This feature will be present with the `--recursive` flag in future versions.
[Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/pull/65307) diff --git a/src/current/_includes/v22.1/metric-names-serverless.md b/src/current/_includes/v22.1/metric-names-serverless.md deleted file mode 100644 index d26d1e892c3..00000000000 --- a/src/current/_includes/v22.1/metric-names-serverless.md +++ /dev/null @@ -1,235 +0,0 @@ -Name | Description ------|----- -`addsstable.applications` | Number of SSTable ingestions applied (i.e. applied by Replicas) -`addsstable.copies` | number of SSTable ingestions that required copying files during application -`addsstable.proposals` | Number of SSTable ingestions proposed (i.e. sent to Raft by lease holders) -`admission.wait_sum.kv-stores` | Total wait time in micros -`admission.wait_sum.kv` | Total wait time in micros -`admission.wait_sum.sql-kv-response` | Total wait time in micros -`admission.wait_sum.sql-sql-response` | Total wait time in micros -`capacity.available` | Available storage capacity -`capacity.reserved` | Capacity reserved for snapshots -`capacity.used` | Used storage capacity -`capacity` | Total storage capacity -`changefeed.emitted_messages` | Messages emitted by all feeds -`changefeed.error_retries` | Total retryable errors encountered by all changefeeds -`changefeed.failures` | Total number of changefeed jobs which have failed -`changefeed.max_behind_nanos` | Largest commit-to-emit duration of any running feed -`changefeed.running` | Number of currently running changefeeds, including sinkless -`clock-offset.meannanos` | Mean clock offset with other nodes -`clock-offset.stddevnanos` | Stddev clock offset with other nodes -`distsender.batches.partial` | Number of partial batches processed after being divided on range boundaries -`distsender.batches` | Number of batches processed -`distsender.errors.notleaseholder` | Number of NotLeaseHolderErrors encountered from replica-addressed RPCs -`distsender.rpc.sent.local` | Number of replica-addressed RPCs sent through the local-server optimization -`distsender.rpc.sent.nextreplicaerror` | Number of replica-addressed RPCs sent due to per-replica errors -`distsender.rpc.sent` | Number of replica-addressed RPCs sent -`exec.error` | Number of batch KV requests that failed to execute on this node. This count excludes transaction restart/abort errors. However, it will include other errors expected during normal operation, such as ConditionFailedError. This metric is thus not an indicator of KV health. -`exec.latency` | Latency of batch KV requests (including errors) executed on this node. This measures requests already addressed to a single replica, from the moment at which they arrive at the internal gRPC endpoint to the moment at which the response (or an error) is returned. This latency includes in particular commit waits, conflict resolution and replication, and end-users can easily produce high measurements via long-running transactions that conflict with foreground traffic. This metric thus does not provide a good signal for understanding the health of the KV layer. -`exec.success` | Number of batch KV requests executed successfully on this node. A request is considered to have executed 'successfully' if it either returns a result or a transaction restart/abort error. 
-`gcbytesage` | Cumulative age of non-live data -`gossip.bytes.received` | Number of received gossip bytes -`gossip.bytes.sent` | Number of sent gossip bytes -`gossip.connections.incoming` | Number of active incoming gossip connections -`gossip.connections.outgoing` | Number of active outgoing gossip connections -`gossip.connections.refused` | Number of refused incoming gossip connections -`gossip.infos.received` | Number of received gossip Info objects -`gossip.infos.sent` | Number of sent gossip Info objects -`intentage` | Cumulative age of intents -`intentbytes` | Number of bytes in intent KV pairs -`intentcount` | Count of intent keys -`jobs.changefeed.resume_retry_error` | Number of changefeed jobs which failed with a retriable error -`keybytes` | Number of bytes taken up by keys -`keycount` | Count of all keys -`leases.epoch` | Number of replica leaseholders using epoch-based leases -`leases.error` | Number of failed lease requests -`leases.expiration` | Number of replica leaseholders using expiration-based leases -`leases.success` | Number of successful lease requests -`leases.transfers.error` | Number of failed lease transfers -`leases.transfers.success` | Number of successful lease transfers -`livebytes` | Number of bytes of live data (keys plus values) -`livecount` | Count of live keys -`liveness.epochincrements` | Number of times this node has incremented its liveness epoch -`liveness.heartbeatfailures` | Number of failed node liveness heartbeats from this node -`liveness.heartbeatlatency` | Node liveness heartbeat latency -`liveness.heartbeatsuccesses` | Number of successful node liveness heartbeats from this node -`liveness.livenodes` | Number of live nodes in the cluster (will be 0 if this node is not itself live) -`queue.consistency.pending` | Number of pending replicas in the consistency checker queue -`queue.consistency.process.failure` | Number of replicas which failed processing in the consistency checker queue -`queue.consistency.process.success` | Number of replicas successfully processed by the consistency checker queue -`queue.consistency.processingnanos` | Nanoseconds spent processing replicas in the consistency checker queue -`queue.gc.info.abortspanconsidered` | Number of AbortSpan entries old enough to be considered for removal -`queue.gc.info.abortspangcnum` | Number of AbortSpan entries fit for removal -`queue.gc.info.abortspanscanned` | Number of transactions present in the AbortSpan scanned from the engine -`queue.gc.info.intentsconsidered` | Number of 'old' intents -`queue.gc.info.intenttxns` | Number of associated distinct transactions -`queue.gc.info.numkeysaffected` | Number of keys with GC'able data -`queue.gc.info.pushtxn` | Number of attempted pushes -`queue.gc.info.resolvesuccess` | Number of successful intent resolutions -`queue.gc.info.resolvetotal` | Number of attempted intent resolutions -`queue.gc.info.transactionspangcaborted` | Number of GC'able entries corresponding to aborted txns -`queue.gc.info.transactionspangccommitted` | Number of GC'able entries corresponding to committed txns -`queue.gc.info.transactionspangcpending` | Number of GC'able entries corresponding to pending txns -`queue.gc.info.transactionspanscanned` | Number of entries in transaction spans scanned from the engine -`queue.gc.pending` | Number of pending replicas in the MVCC GC queue -`queue.gc.process.failure` | Number of replicas which failed processing in the MVCC GC queue -`queue.gc.process.success` | Number of replicas successfully processed by the MVCC GC queue 
-`queue.gc.processingnanos` | Nanoseconds spent processing replicas in the MVCC GC queue -`queue.raftlog.pending` | Number of pending replicas in the Raft log queue -`queue.raftlog.process.failure` | Number of replicas which failed processing in the Raft log queue -`queue.raftlog.process.success` | Number of replicas successfully processed by the Raft log queue -`queue.raftlog.processingnanos` | Nanoseconds spent processing replicas in the Raft log queue -`queue.raftsnapshot.pending` | Number of pending replicas in the Raft repair queue -`queue.raftsnapshot.process.failure` | Number of replicas which failed processing in the Raft repair queue -`queue.raftsnapshot.process.success` | Number of replicas successfully processed by the Raft repair queue -`queue.raftsnapshot.processingnanos` | Nanoseconds spent processing replicas in the Raft repair queue -`queue.replicagc.pending` | Number of pending replicas in the replica GC queue -`queue.replicagc.process.failure` | Number of replicas which failed processing in the replica GC queue -`queue.replicagc.process.success` | Number of replicas successfully processed by the replica GC queue -`queue.replicagc.processingnanos` | Nanoseconds spent processing replicas in the replica GC queue -`queue.replicagc.removereplica` | Number of replica removals attempted by the replica GC queue -`queue.replicate.addreplica` | Number of replica additions attempted by the replicate queue -`queue.replicate.pending` | Number of pending replicas in the replicate queue -`queue.replicate.process.failure` | Number of replicas which failed processing in the replicate queue -`queue.replicate.process.success` | Number of replicas successfully processed by the replicate queue -`queue.replicate.processingnanos` | Nanoseconds spent processing replicas in the replicate queue -`queue.replicate.purgatory` | Number of replicas in the replicate queue's purgatory, awaiting allocation options -`queue.replicate.rebalancereplica` | Number of replica rebalancer-initiated additions attempted by the replicate queue -`queue.replicate.removedeadreplica` | Number of dead replica removals attempted by the replicate queue (typically in response to a node outage) -`queue.replicate.removereplica` | Number of replica removals attempted by the replicate queue (typically in response to a rebalancer-initiated addition) -`queue.replicate.transferlease` | Number of range lease transfers attempted by the replicate queue -`queue.split.pending` | Number of pending replicas in the split queue -`queue.split.process.failure` | Number of replicas which failed processing in the split queue -`queue.split.process.success` | Number of replicas successfully processed by the split queue -`queue.split.processingnanos` | Nanoseconds spent processing replicas in the split queue -`queue.tsmaintenance.pending` | Number of pending replicas in the time series maintenance queue -`queue.tsmaintenance.process.failure` | Number of replicas which failed processing in the time series maintenance queue -`queue.tsmaintenance.process.success` | Number of replicas successfully processed by the time series maintenance queue -`queue.tsmaintenance.processingnanos` | Nanoseconds spent processing replicas in the time series maintenance queue -`raft.commandsapplied` | Count of Raft commands applied. 
This measurement is taken on the Raft apply loops of all Replicas (leaders and followers alike), meaning that it does not measure the number of Raft commands *proposed* (in the hypothetical extreme case, all Replicas may apply all commands through snapshots, thus not increasing this metric at all). Instead, it is a proxy for how much work is being done advancing the Replica state machines on this node. -`raft.enqueued.pending` | Number of pending outgoing messages in the Raft Transport queue. The queue is bounded in size, so instead of unbounded growth one would observe a ceiling value in the tens of thousands. -`raft.heartbeats.pending` | Number of pending heartbeats and responses waiting to be coalesced -`raft.process.commandcommit.latency` | Latency histogram for applying a batch of Raft commands to the state machine. This metric is misnamed: it measures the latency for *applying* a batch of committed Raft commands to a Replica state machine. This requires only non-durable I/O (except for replication configuration changes). Note that a "batch" in this context is really a sub-batch of the batch received for application during raft ready handling. The 'raft.process.applycommitted.latency' histogram is likely more suitable in most cases, as it measures the total latency across all sub-batches (i.e. the sum of commandcommit.latency for a complete batch). -`raft.process.logcommit.latency` | Latency histogram for committing Raft log entries to stable storage. This measures the latency of durably committing a group of newly received Raft entries as well as the HardState entry to disk. This excludes any data processing, i.e. we measure purely the commit latency of the resulting Engine write. Homogeneous bands of p50-p99 latencies (in the presence of regular Raft traffic), make it likely that the storage layer is healthy. Spikes in the latency bands can either hint at the presence of large sets of Raft entries being received, or at performance issues at the storage layer. -`raft.process.tickingnanos` | Nanoseconds spent in store.processRaft() processing replica.Tick() -`raft.process.workingnanos` | Nanoseconds spent in store.processRaft() working. This is the sum of the measurements passed to the raft.process.handleready.latency histogram. -`raft.rcvd.app` | Number of MsgApp messages received by this store -`raft.rcvd.appresp` | Number of MsgAppResp messages received by this store -`raft.rcvd.dropped` | Number of dropped incoming Raft messages -`raft.rcvd.heartbeat` | Number of (coalesced, if enabled) MsgHeartbeat messages received by this store -`raft.rcvd.heartbeatresp` | Number of (coalesced, if enabled) MsgHeartbeatResp messages received by this store -`raft.rcvd.prevote` | Number of MsgPreVote messages received by this store -`raft.rcvd.prevoteresp` | Number of MsgPreVoteResp messages received by this store -`raft.rcvd.prop` | Number of MsgProp messages received by this store -`raft.rcvd.snap` | Number of MsgSnap messages received by this store -`raft.rcvd.timeoutnow` | Number of MsgTimeoutNow messages received by this store -`raft.rcvd.transferleader` | Number of MsgTransferLeader messages received by this store -`raft.rcvd.vote` | Number of MsgVote messages received by this store -`raft.rcvd.voteresp` | Number of MsgVoteResp messages received by this store -`raft.ticks` | Number of Raft ticks queued -`raftlog.behind` | Number of Raft log entries followers on other stores are behind. 
This gauge provides a view of the aggregate number of log entries the Raft leaders on this node think the followers are behind. Since a raft leader may not always have a good estimate for this information for all of its followers, and since followers are expected to be behind (when they are not required as part of a quorum) *and* the aggregate thus scales like the count of such followers, it is difficult to meaningfully interpret this metric. -`raftlog.truncated` | Number of Raft log entries truncated -`range.adds` | Number of range additions -`range.raftleadertransfers` | Number of raft leader transfers -`range.removes` | Number of range removals -`range.snapshots.generated` | Number of generated snapshots -`range.splits` | Number of range splits -`ranges.overreplicated` | Number of ranges with more live replicas than the replication target -`ranges.unavailable` | Number of ranges with fewer live replicas than needed for quorum -`ranges.underreplicated` | Number of ranges with fewer live replicas than the replication target -`ranges` | Number of ranges -`rebalancing.writespersecond` | Number of keys written (i.e. applied by raft) per second to the store, averaged over a large time period as used in rebalancing decisions -`replicas.leaders_not_leaseholders` | Number of replicas that are Raft leaders whose range lease is held by another store -`replicas.leaders` | Number of raft leaders -`replicas.leaseholders` | Number of lease holders -`replicas.quiescent` | Number of quiesced replicas -`replicas.reserved` | Number of replicas reserved for snapshots -`replicas` | Number of replicas -`requests.backpressure.split` | Number of backpressured writes waiting on a Range split. A Range will backpressure (roughly) non-system traffic when the range is above the configured size until the range splits. When the rate of this metric is nonzero over extended periods of time, it should be investigated why splits are not occurring. -`requests.slow.distsender` | Number of replica-bound RPCs currently stuck or retrying for a long time. Note that this is not a good signal for KV health. The remote side of the RPCs tracked here may experience contention, so an end user can easily cause values for this metric to be emitted by leaving a transaction open for a long time and contending with it using a second transaction. -`requests.slow.lease` | Number of requests that have been stuck for a long time acquiring a lease. This gauge registering a nonzero value usually indicates range or replica unavailability, and should be investigated. In the common case, we also expect to see 'requests.slow.raft' to register a nonzero value, indicating that the lease requests are not getting a timely response from the replication layer. -`requests.slow.raft` | Number of requests that have been stuck for a long time in the replication layer. An (evaluated) request has to pass through the replication layer, notably the quota pool and raft. If it fails to do so within a highly permissive duration, the gauge is incremented (and decremented again once the request is either applied or returns an error). A nonzero value indicates range or replica unavailability, and should be investigated. 
-`rocksdb.block.cache.hits` | Count of block cache hits -`rocksdb.block.cache.misses` | Count of block cache misses -`rocksdb.block.cache.pinned-usage` | Bytes pinned by the block cache -`rocksdb.block.cache.usage` | Bytes used by the block cache -`rocksdb.bloom.filter.prefix.checked` | Number of times the bloom filter was checked -`rocksdb.bloom.filter.prefix.useful` | Number of times the bloom filter helped avoid iterator creation -`rocksdb.compactions` | Number of table compactions -`rocksdb.flushes` | Number of table flushes -`rocksdb.memtable.total-size` | Current size of memtable in bytes -`rocksdb.num-sstables` | Number of storage engine SSTables -`rocksdb.read-amplification` | Number of disk reads per query -`rocksdb.table-readers-mem-estimate` | Memory used by index and filter blocks -`round-trip-latency` | Distribution of round-trip latencies with other nodes -`sql.bytesin` | Number of sql bytes received -`sql.bytesout` | Number of sql bytes sent -`sql.conn.latency` | Latency to establish and authenticate a SQL connection -`sql.conns` | Number of active sql connections -`sql.ddl.count` | Number of SQL DDL statements successfully executed -`sql.delete.count` | Number of SQL DELETE statements successfully executed -`sql.distsql.contended_queries.count` | Number of SQL queries that experienced contention -`sql.distsql.exec.latency` | Latency of DistSQL statement execution -`sql.distsql.flows.active` | Number of distributed SQL flows currently active -`sql.distsql.flows.total` | Number of distributed SQL flows executed -`sql.distsql.queries.active` | Number of SQL queries currently active -`sql.distsql.queries.total` | Number of SQL queries executed -`sql.distsql.select.count` | Number of DistSQL SELECT statements -`sql.distsql.service.latency` | Latency of DistSQL request execution -`sql.exec.latency` | Latency of SQL statement execution -`sql.failure.count` | Number of statements resulting in a planning or runtime error -`sql.full.scan.count` | Number of full table or index scans -`sql.insert.count` | Number of SQL INSERT statements successfully executed -`sql.mem.distsql.current` | Current sql statement memory usage for distsql -`sql.mem.distsql.max` | Memory usage per sql statement for distsql -`sql.mem.internal.session.current` | Current sql session memory usage for internal -`sql.mem.internal.session.max` | Memory usage per sql session for internal -`sql.mem.internal.txn.current` | Current sql transaction memory usage for internal -`sql.mem.internal.txn.max` | Memory usage per sql transaction for internal -`sql.misc.count` | Number of other SQL statements successfully executed -`sql.query.count` | Number of SQL queries executed -`sql.select.count` | Number of SQL SELECT statements successfully executed -`sql.service.latency` | Latency of SQL request execution -`sql.statements.active` | Number of currently active user SQL statements -`sql.txn.abort.count` | Number of SQL transaction abort errors -`sql.txn.begin.count` | Number of SQL transaction BEGIN statements successfully executed -`sql.txn.commit.count` | Number of SQL transaction COMMIT statements successfully executed -`sql.txn.latency` | Latency of SQL transactions -`sql.txn.rollback.count` | Number of SQL transaction ROLLBACK statements successfully executed -`sql.txns.open` | Number of currently open user SQL transactions -`sql.update.count` | Number of SQL UPDATE statements successfully executed -`sys.cgo.allocbytes` | Current bytes of memory allocated by cgo -`sys.cgo.totalbytes` | Total bytes of memory allocated by 
cgo, but not released -`sys.cgocalls` | Total number of cgo calls -`sys.cpu.combined.percent-normalized` | Current user+system cpu percentage, normalized 0-1 by number of cores -`sys.cpu.sys.ns` | Total system cpu time -`sys.cpu.sys.percent` | Current system cpu percentage -`sys.cpu.user.ns` | Total user cpu time -`sys.cpu.user.percent` | Current user cpu percentage -`sys.fd.open` | Process open file descriptors -`sys.fd.softlimit` | Process open FD soft limit -`sys.gc.count` | Total number of GC runs -`sys.gc.pause.ns` | Total GC pause -`sys.gc.pause.percent` | Current GC pause percentage -`sys.go.allocbytes` | Current bytes of memory allocated by go -`sys.go.totalbytes` | Total bytes of memory allocated by go, but not released -`sys.goroutines` | Current number of goroutines -`sys.host.net.recv.bytes` | Bytes received on all network interfaces since this process started -`sys.host.net.send.bytes` | Bytes sent on all network interfaces since this process started -`sys.rss` | Current process RSS -`sys.uptime` | Process uptime -`sysbytes` | Number of bytes in system KV pairs -`syscount` | Count of system KV pairs -`timeseries.write.bytes` | Total size in bytes of metric samples written to disk -`timeseries.write.errors` | Total errors encountered while attempting to write metrics to disk -`timeseries.write.samples` | Total number of metric samples written to disk -`totalbytes` | Total number of bytes taken up by keys and values including non-live data -`txn.aborts` | Number of aborted KV transactions -`txn.commits1PC` | Number of KV transaction one-phase commit attempts -`txn.commits` | Number of committed KV transactions (including 1PC) -`txn.durations` | KV transaction durations -`txn.restarts.serializable` | Number of restarts due to a forwarded commit timestamp and isolation=SERIALIZABLE -`txn.restarts.writetooold` | Number of restarts due to a concurrent writer committing first -`txn.restarts` | Number of restarted KV transactions -`valbytes` | Number of bytes taken up by values -`valcount` | Count of all values diff --git a/src/current/_includes/v22.1/metric-names.md b/src/current/_includes/v22.1/metric-names.md deleted file mode 100644 index 84074c0b373..00000000000 --- a/src/current/_includes/v22.1/metric-names.md +++ /dev/null @@ -1,256 +0,0 @@ -Name | Description ------|------------ -`addsstable.applications` | Number of SSTable ingestions applied (i.e., applied by Replicas) -`addsstable.copies` | Number of SSTable ingestions that required copying files during application -`addsstable.proposals` | Number of SSTable ingestions proposed (i.e., sent to Raft by lease holders) -`build.timestamp` | Build information -`capacity.available` | Available storage capacity -`capacity.reserved` | Capacity reserved for snapshots -`capacity.used` | Used storage capacity -`capacity` | Total storage capacity -`changefeed.failures` | Total number of changefeed jobs which have failed -`changefeed.running` | Number of currently running changefeeds, including sinkless -`clock-offset.meannanos` | Mean clock offset with other nodes in nanoseconds -`clock-offset.stddevnanos` | Std dev clock offset with other nodes in nanoseconds -`compactor.compactingnanos` | Number of nanoseconds spent compacting ranges -`compactor.compactions.failure` | Number of failed compaction requests sent to the storage engine -`compactor.compactions.success` | Number of successful compaction requests sent to the storage engine -`compactor.suggestionbytes.compacted` | Number of logical bytes compacted from suggested compactions 
-`compactor.suggestionbytes.queued` | Number of logical bytes in suggested compactions in the queue -`compactor.suggestionbytes.skipped` | Number of logical bytes in suggested compactions which were not compacted -`distsender.batches.partial` | Number of partial batches processed -`distsender.batches` | Number of batches processed -`distsender.errors.notleaseholder` | Number of NotLeaseHolderErrors encountered -`distsender.rpc.sent.local` | Number of local RPCs sent -`distsender.rpc.sent.nextreplicaerror` | Number of RPCs sent due to per-replica errors -`distsender.rpc.sent` | Number of RPCs sent -`exec.error` | Number of batch KV requests that failed to execute on this node -`exec.latency` | Latency in nanoseconds of batch KV requests executed on this node -`exec.success` | Number of batch KV requests executed successfully on this node -`gcbytesage` | Cumulative age of non-live data in seconds -`gossip.bytes.received` | Number of received gossip bytes -`gossip.bytes.sent` | Number of sent gossip bytes -`gossip.connections.incoming` | Number of active incoming gossip connections -`gossip.connections.outgoing` | Number of active outgoing gossip connections -`gossip.connections.refused` | Number of refused incoming gossip connections -`gossip.infos.received` | Number of received gossip Info objects -`gossip.infos.sent` | Number of sent gossip Info objects -`intentage` | Cumulative age of intents in seconds -`intentbytes` | Number of bytes in intent KV pairs -`intentcount` | Count of intent keys -`keybytes` | Number of bytes taken up by keys -`keycount` | Count of all keys -`lastupdatenanos` | Time in nanoseconds since Unix epoch at which bytes/keys/intents metrics were last updated -`leases.epoch` | Number of replica leaseholders using epoch-based leases -`leases.error` | Number of failed lease requests -`leases.expiration` | Number of replica leaseholders using expiration-based leases -`leases.success` | Number of successful lease requests -`leases.transfers.error` | Number of failed lease transfers -`leases.transfers.success` | Number of successful lease transfers -`livebytes` | Number of bytes of live data (keys plus values), including unreplicated data -`livecount` | Count of live keys -`liveness.epochincrements` | Number of times this node has incremented its liveness epoch -`liveness.heartbeatfailures` | Number of failed node liveness heartbeats from this node -`liveness.heartbeatlatency` | Node liveness heartbeat latency in nanoseconds -`liveness.heartbeatsuccesses` | Number of successful node liveness heartbeats from this node -`liveness.livenodes` | Number of live nodes in the cluster (will be 0 if this node is not itself live) -`node-id` | node ID with labels for advertised RPC and HTTP addresses -`queue.consistency.pending` | Number of pending replicas in the consistency checker queue -`queue.consistency.process.failure` | Number of replicas which failed processing in the consistency checker queue -`queue.consistency.process.success` | Number of replicas successfully processed by the consistency checker queue -`queue.consistency.processingnanos` | Nanoseconds spent processing replicas in the consistency checker queue -`queue.gc.info.abortspanconsidered` | Number of AbortSpan entries old enough to be considered for removal -`queue.gc.info.abortspangcnum` | Number of AbortSpan entries fit for removal -`queue.gc.info.abortspanscanned` | Number of transactions present in the AbortSpan scanned from the engine -`queue.gc.info.intentsconsidered` | Number of 'old' intents 
-`queue.gc.info.intenttxns` | Number of associated distinct transactions -`queue.gc.info.numkeysaffected` | Number of keys with GC'able data -`queue.gc.info.pushtxn` | Number of attempted pushes -`queue.gc.info.resolvesuccess` | Number of successful intent resolutions -`queue.gc.info.resolvetotal` | Number of attempted intent resolutions -`queue.gc.info.transactionspangcaborted` | Number of GC'able entries corresponding to aborted txns -`queue.gc.info.transactionspangccommitted` | Number of GC'able entries corresponding to committed txns -`queue.gc.info.transactionspangcpending` | Number of GC'able entries corresponding to pending txns -`queue.gc.info.transactionspanscanned` | Number of entries in transaction spans scanned from the engine -`queue.gc.pending` | Number of pending replicas in the GC queue -`queue.gc.process.failure` | Number of replicas which failed processing in the GC queue -`queue.gc.process.success` | Number of replicas successfully processed by the GC queue -`queue.gc.processingnanos` | Nanoseconds spent processing replicas in the GC queue -`queue.raftlog.pending` | Number of pending replicas in the Raft log queue -`queue.raftlog.process.failure` | Number of replicas which failed processing in the Raft log queue -`queue.raftlog.process.success` | Number of replicas successfully processed by the Raft log queue -`queue.raftlog.processingnanos` | Nanoseconds spent processing replicas in the Raft log queue -`queue.raftsnapshot.pending` | Number of pending replicas in the Raft repair queue -`queue.raftsnapshot.process.failure` | Number of replicas which failed processing in the Raft repair queue -`queue.raftsnapshot.process.success` | Number of replicas successfully processed by the Raft repair queue -`queue.raftsnapshot.processingnanos` | Nanoseconds spent processing replicas in the Raft repair queue -`queue.replicagc.pending` | Number of pending replicas in the replica GC queue -`queue.replicagc.process.failure` | Number of replicas which failed processing in the replica GC queue -`queue.replicagc.process.success` | Number of replicas successfully processed by the replica GC queue -`queue.replicagc.processingnanos` | Nanoseconds spent processing replicas in the replica GC queue -`queue.replicagc.removereplica` | Number of replica removals attempted by the replica gc queue -`queue.replicate.addreplica` | Number of replica additions attempted by the replicate queue -`queue.replicate.pending` | Number of pending replicas in the replicate queue -`queue.replicate.process.failure` | Number of replicas which failed processing in the replicate queue -`queue.replicate.process.success` | Number of replicas successfully processed by the replicate queue -`queue.replicate.processingnanos` | Nanoseconds spent processing replicas in the replicate queue -`queue.replicate.purgatory` | Number of replicas in the replicate queue's purgatory, awaiting allocation options -`queue.replicate.rebalancereplica` | Number of replica rebalancer-initiated additions attempted by the replicate queue -`queue.replicate.removedeadreplica` | Number of dead replica removals attempted by the replicate queue (typically in response to a node outage) -`queue.replicate.removereplica` | Number of replica removals attempted by the replicate queue (typically in response to a rebalancer-initiated addition) -`queue.replicate.transferlease` | Number of range lease transfers attempted by the replicate queue -`queue.split.pending` | Number of pending replicas in the split queue -`queue.split.process.failure` | Number of 
replicas which failed processing in the split queue -`queue.split.process.success` | Number of replicas successfully processed by the split queue -`queue.split.processingnanos` | Nanoseconds spent processing replicas in the split queue -`queue.tsmaintenance.pending` | Number of pending replicas in the time series maintenance queue -`queue.tsmaintenance.process.failure` | Number of replicas which failed processing in the time series maintenance queue -`queue.tsmaintenance.process.success` | Number of replicas successfully processed by the time series maintenance queue -`queue.tsmaintenance.processingnanos` | Nanoseconds spent processing replicas in the time series maintenance queue -`raft.commandsapplied` | Count of Raft commands applied -`raft.enqueued.pending` | Number of pending outgoing messages in the Raft Transport queue -`raft.heartbeats.pending` | Number of pending heartbeats and responses waiting to be coalesced -`raft.process.commandcommit.latency` | Latency histogram in nanoseconds for committing Raft commands -`raft.process.logcommit.latency` | Latency histogram in nanoseconds for committing Raft log entries -`raft.process.tickingnanos` | Nanoseconds spent in store.processRaft() processing replica.Tick() -`raft.process.workingnanos` | Nanoseconds spent in store.processRaft() working -`raft.rcvd.app` | Number of MsgApp messages received by this store -`raft.rcvd.appresp` | Number of MsgAppResp messages received by this store -`raft.rcvd.dropped` | Number of dropped incoming Raft messages -`raft.rcvd.heartbeat` | Number of (coalesced, if enabled) MsgHeartbeat messages received by this store -`raft.rcvd.heartbeatresp` | Number of (coalesced, if enabled) MsgHeartbeatResp messages received by this store -`raft.rcvd.prevote` | Number of MsgPreVote messages received by this store -`raft.rcvd.prevoteresp` | Number of MsgPreVoteResp messages received by this store -`raft.rcvd.prop` | Number of MsgProp messages received by this store -`raft.rcvd.snap` | Number of MsgSnap messages received by this store -`raft.rcvd.timeoutnow` | Number of MsgTimeoutNow messages received by this store -`raft.rcvd.transferleader` | Number of MsgTransferLeader messages received by this store -`raft.rcvd.vote` | Number of MsgVote messages received by this store -`raft.rcvd.voteresp` | Number of MsgVoteResp messages received by this store -`raft.ticks` | Number of Raft ticks queued -`raftlog.behind` | Number of Raft log entries followers on other stores are behind -`raftlog.truncated` | Number of Raft log entries truncated -`range.adds` | Number of range additions -`range.raftleadertransfers` | Number of Raft leader transfers -`range.removes` | Number of range removals -`range.snapshots.generated` | Number of generated snapshots -`range.snapshots.normal-applied` | Number of applied snapshots -`range.snapshots.preemptive-applied` | Number of applied preemptive snapshots -`range.snapshots.rcvd-bytes` | Number of snapshot bytes received -`range.snapshots.sent-bytes` | Number of snapshot bytes sent -`range.splits` | Number of range splits -`ranges.unavailable` | Number of ranges with fewer live replicas than needed for quorum -`ranges.underreplicated` | Number of ranges with fewer live replicas than the replication target -`ranges` | Number of ranges -`rebalancing.writespersecond` | Number of keys written (i.e., applied by Raft) per second to the store, averaged over a large time period as used in rebalancing decisions -`replicas.commandqueue.combinedqueuesize` | Number of commands in all CommandQueues combined 
-`replicas.commandqueue.combinedreadcount` | Number of read-only commands in all CommandQueues combined -`replicas.commandqueue.combinedwritecount` | Number of read-write commands in all CommandQueues combined -`replicas.commandqueue.maxoverlaps` | Largest number of overlapping commands seen when adding to any CommandQueue -`replicas.commandqueue.maxreadcount` | Largest number of read-only commands in any CommandQueue -`replicas.commandqueue.maxsize` | Largest number of commands in any CommandQueue -`replicas.commandqueue.maxtreesize` | Largest number of intervals in any CommandQueue's interval tree -`replicas.commandqueue.maxwritecount` | Largest number of read-write commands in any CommandQueue -`replicas.leaders_invalid_lease` | Number of replicas that are Raft leaders whose lease is invalid -`replicas.leaders_not_leaseholders` | Number of replicas that are Raft leaders whose range lease is held by another store -`replicas.leaders` | Number of Raft leaders -`replicas.leaseholders` | Number of lease holders -`replicas.quiescent` | Number of quiesced replicas -`replicas.reserved` | Number of replicas reserved for snapshots -`replicas` | Number of replicas -`requests.backpressure.split` | Number of backpressured writes waiting on a Range split -`requests.slow.commandqueue` | Number of requests that have been stuck for a long time in the command queue -`requests.slow.distsender` | Number of requests that have been stuck for a long time in the dist sender -`requests.slow.lease` | Number of requests that have been stuck for a long time acquiring a lease -`requests.slow.raft` | Number of requests that have been stuck for a long time in Raft -`rocksdb.block.cache.hits` | Count of block cache hits -`rocksdb.block.cache.misses` | Count of block cache misses -`rocksdb.block.cache.pinned-usage` | Bytes pinned by the block cache -`rocksdb.block.cache.usage` | Bytes used by the block cache -`rocksdb.bloom.filter.prefix.checked` | Number of times the bloom filter was checked -`rocksdb.bloom.filter.prefix.useful` | Number of times the bloom filter helped avoid iterator creation -`rocksdb.compactions` | Number of table compactions -`rocksdb.flushes` | Number of table flushes -`rocksdb.memtable.total-size` | Current size of memtable in bytes -`rocksdb.num-sstables` | Number of storage engine SSTables -`rocksdb.read-amplification` | Number of disk reads per query -`rocksdb.table-readers-mem-estimate` | Memory used by index and filter blocks -`round-trip-latency` | Distribution of round-trip latencies with other nodes in nanoseconds -`security.certificate.expiration.ca` | Expiration timestamp in seconds since Unix epoch for the CA certificate. 0 means no certificate or error. -`security.certificate.expiration.node` | Expiration timestamp in seconds since Unix epoch for the node certificate. 0 means no certificate or error. -`sql.bytesin` | Number of sql bytes received -`sql.bytesout` | Number of sql bytes sent -`sql.conns` | Number of active sql connections -`sql.ddl.count` | Number of SQL DDL statements -`sql.delete.count` | Number of SQL DELETE statements -`sql.distsql.exec.latency` | Latency in nanoseconds of SQL statement executions running on the distributed execution engine. This metric does not include the time to parse and plan the statement. 
-`sql.distsql.flows.active` | Number of distributed SQL flows currently active -`sql.distsql.flows.total` | Number of distributed SQL flows executed -`sql.distsql.queries.active` | Number of distributed SQL queries currently active -`sql.distsql.queries.total` | Number of distributed SQL queries executed -`sql.distsql.select.count` | Number of DistSQL SELECT statements -`sql.distsql.service.latency` | Latency in nanoseconds of SQL statement executions running on the distributed execution engine, including the time to parse and plan the statement. -`sql.exec.latency` | Latency in nanoseconds of all SQL statement executions. This metric does not include the time to parse and plan the statement. -`sql.guardrails.max_row_size_err.count` | Number of times a large row violates the corresponding `sql.guardrails.max_row_size_err` limit. -`sql.guardrails.max_row_size_log.count` | Number of times a large row violates the corresponding `sql.guardrails.max_row_size_log` limit. -`sql.insert.count` | Number of SQL INSERT statements -`sql.mem.current` | Current sql statement memory usage -`sql.mem.distsql.current` | Current sql statement memory usage for distsql -`sql.mem.distsql.max` | Memory usage per sql statement for distsql -`sql.mem.max` | Memory usage per sql statement -`sql.mem.session.current` | Current sql session memory usage -`sql.mem.session.max` | Memory usage per sql session -`sql.mem.txn.current` | Current sql transaction memory usage -`sql.mem.txn.max` | Memory usage per sql transaction -`sql.misc.count` | Number of other SQL statements -`sql.pgwire_cancel.total` | Counter of the number of pgwire query cancel requests -`sql.pgwire_cancel.ignored` | Counter of the number of pgwire query cancel requests that were ignored due to rate limiting -`sql.pgwire_cancel.successful` | Counter of the number of pgwire query cancel requests that were successful -`sql.query.count` | Number of SQL queries -`sql.select.count` | Number of SQL SELECT statements -`sql.service.latency` | Latency in nanoseconds of SQL request execution, including the time to parse and plan the statement. 
-`sql.txn.abort.count` | Number of SQL transaction ABORT statements -`sql.txn.begin.count` | Number of SQL transaction BEGIN statements -`sql.txn.commit.count` | Number of SQL transaction COMMIT statements -`sql.txn.rollback.count` | Number of SQL transaction ROLLBACK statements -`sql.update.count` | Number of SQL UPDATE statements -`sys.cgo.allocbytes` | Current bytes of memory allocated by cgo -`sys.cgo.totalbytes` | Total bytes of memory allocated by cgo, but not released -`sys.cgocalls` | Total number of cgo call -`sys.cpu.sys.ns` | Total system cpu time in nanoseconds -`sys.cpu.sys.percent` | Current system cpu percentage -`sys.cpu.user.ns` | Total user cpu time in nanoseconds -`sys.cpu.user.percent` | Current user cpu percentage -`sys.fd.open` | Process open file descriptors -`sys.fd.softlimit` | Process open FD soft limit -`sys.gc.count` | Total number of GC runs -`sys.gc.pause.ns` | Total GC pause in nanoseconds -`sys.gc.pause.percent` | Current GC pause percentage -`sys.go.allocbytes` | Current bytes of memory allocated by go -`sys.go.totalbytes` | Total bytes of memory allocated by go, but not released -`sys.goroutines` | Current number of goroutines -`sys.rss` | Current process RSS -`sys.uptime` | Process uptime in seconds -`sysbytes` | Number of bytes in system KV pairs -`syscount` | Count of system KV pairs -`timeseries.write.bytes` | Total size in bytes of metric samples written to disk -`timeseries.write.errors` | Total errors encountered while attempting to write metrics to disk -`timeseries.write.samples` | Total number of metric samples written to disk -`totalbytes` | Total number of bytes taken up by keys and values including non-live data -`tscache.skl.read.pages` | Number of pages in the read timestamp cache -`tscache.skl.read.rotations` | Number of page rotations in the read timestamp cache -`tscache.skl.write.pages` | Number of pages in the write timestamp cache -`tscache.skl.write.rotations` | Number of page rotations in the write timestamp cache -`txn.abandons` | Number of abandoned KV transactions -`txn.aborts` | Number of aborted KV transactions -`txn.autoretries` | Number of automatic retries to avoid serializable restarts -`txn.commits1PC` | Number of committed one-phase KV transactions -`txn.commits` | Number of committed KV transactions (including 1PC) -`txn.durations` | KV transaction durations in nanoseconds -`txn.restarts.deleterange` | Number of restarts due to a forwarded commit timestamp and a DeleteRange command -`txn.restarts.possiblereplay` | Number of restarts due to possible replays of command batches at the storage layer -`txn.restarts.serializable` | Number of restarts due to a forwarded commit timestamp and isolation=SERIALIZABLE -`txn.restarts.writetooold` | Number of restarts due to a concurrent writer committing first -`txn.restarts` | Number of restarted KV transactions -`valbytes` | Number of bytes taken up by values -`valcount` | Count of all values diff --git a/src/current/_includes/v22.1/misc/available-capacity-metric.md b/src/current/_includes/v22.1/misc/available-capacity-metric.md deleted file mode 100644 index 61dbcb9cbf2..00000000000 --- a/src/current/_includes/v22.1/misc/available-capacity-metric.md +++ /dev/null @@ -1 +0,0 @@ -If you are testing your deployment locally with multiple CockroachDB nodes running on a single machine (this is [not recommended in production](recommended-production-settings.html#topology)), you must explicitly [set the store size](cockroach-start.html#store) per node in order to display the correct 
capacity. Otherwise, the machine's actual disk capacity will be counted as a separate store for each node, thus inflating the computed capacity. \ No newline at end of file diff --git a/src/current/_includes/v22.1/misc/aws-locations.md b/src/current/_includes/v22.1/misc/aws-locations.md deleted file mode 100644 index 8b073c1f230..00000000000 --- a/src/current/_includes/v22.1/misc/aws-locations.md +++ /dev/null @@ -1,18 +0,0 @@ -| Location | SQL Statement | -| ------ | ------ | -| US East (N. Virginia) | `INSERT into system.locations VALUES ('region', 'us-east-1', 37.478397, -76.453077)`| -| US East (Ohio) | `INSERT into system.locations VALUES ('region', 'us-east-2', 40.417287, -76.453077)` | -| US West (N. California) | `INSERT into system.locations VALUES ('region', 'us-west-1', 38.837522, -120.895824)` | -| US West (Oregon) | `INSERT into system.locations VALUES ('region', 'us-west-2', 43.804133, -120.554201)` | -| Canada (Central) | `INSERT into system.locations VALUES ('region', 'ca-central-1', 56.130366, -106.346771)` | -| EU (Frankfurt) | `INSERT into system.locations VALUES ('region', 'eu-central-1', 50.110922, 8.682127)` | -| EU (Ireland) | `INSERT into system.locations VALUES ('region', 'eu-west-1', 53.142367, -7.692054)` | -| EU (London) | `INSERT into system.locations VALUES ('region', 'eu-west-2', 51.507351, -0.127758)` | -| EU (Paris) | `INSERT into system.locations VALUES ('region', 'eu-west-3', 48.856614, 2.352222)` | -| Asia Pacific (Tokyo) | `INSERT into system.locations VALUES ('region', 'ap-northeast-1', 35.689487, 139.691706)` | -| Asia Pacific (Seoul) | `INSERT into system.locations VALUES ('region', 'ap-northeast-2', 37.566535, 126.977969)` | -| Asia Pacific (Osaka-Local) | `INSERT into system.locations VALUES ('region', 'ap-northeast-3', 34.693738, 135.502165)` | -| Asia Pacific (Singapore) | `INSERT into system.locations VALUES ('region', 'ap-southeast-1', 1.352083, 103.819836)` | -| Asia Pacific (Sydney) | `INSERT into system.locations VALUES ('region', 'ap-southeast-2', -33.86882, 151.209296)` | -| Asia Pacific (Mumbai) | `INSERT into system.locations VALUES ('region', 'ap-south-1', 19.075984, 72.877656)` | -| South America (São Paulo) | `INSERT into system.locations VALUES ('region', 'sa-east-1', -23.55052, -46.633309)` | diff --git a/src/current/_includes/v22.1/misc/azure-env-param.md b/src/current/_includes/v22.1/misc/azure-env-param.md deleted file mode 100644 index 29b5cb04f2d..00000000000 --- a/src/current/_includes/v22.1/misc/azure-env-param.md +++ /dev/null @@ -1 +0,0 @@ -The [Azure environment](https://learn.microsoft.com/en-us/azure/deployment-environments/concept-environments-key-concepts#environments) that the storage account belongs to. The accepted values are: `AZURECHINACLOUD`, `AZUREGERMANCLOUD`, `AZUREPUBLICCLOUD`, and [`AZUREUSGOVERNMENTCLOUD`](https://learn.microsoft.com/en-us/azure/azure-government/documentation-government-developer-guide). These are cloud environments that meet security, compliance, and data privacy requirements for the respective instance of Azure cloud. If the parameter is not specified, it will default to `AZUREPUBLICCLOUD`. 
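As a rough sketch of where this value fits, the example below passes the environment as a query parameter on an Azure storage URI used by a bulk operation such as `BACKUP`. The container, account, and parameter names (`AZURE_ACCOUNT_NAME`, `AZURE_ACCOUNT_KEY`, `AZURE_ENVIRONMENT`) are illustrative assumptions, not values taken from this file; substitute the spellings and credentials used by your own deployment:

~~~ sql
-- Sketch only: back up to a storage account assumed to live in the Azure Government cloud.
-- Replace the container, account name, and key with your own values.
BACKUP DATABASE movr INTO
  'azure://acme-backups/nightly?AZURE_ACCOUNT_NAME=acmestorage&AZURE_ACCOUNT_KEY=<url-encoded-key>&AZURE_ENVIRONMENT=AZUREUSGOVERNMENTCLOUD';
~~~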
\ No newline at end of file diff --git a/src/current/_includes/v22.1/misc/azure-locations.md b/src/current/_includes/v22.1/misc/azure-locations.md deleted file mode 100644 index 7119ff8b7cb..00000000000 --- a/src/current/_includes/v22.1/misc/azure-locations.md +++ /dev/null @@ -1,30 +0,0 @@ -| Location | SQL Statement | -| -------- | ------------- | -| eastasia (East Asia) | `INSERT into system.locations VALUES ('region', 'eastasia', 22.267, 114.188)` | -| southeastasia (Southeast Asia) | `INSERT into system.locations VALUES ('region', 'southeastasia', 1.283, 103.833)` | -| centralus (Central US) | `INSERT into system.locations VALUES ('region', 'centralus', 41.5908, -93.6208)` | -| eastus (East US) | `INSERT into system.locations VALUES ('region', 'eastus', 37.3719, -79.8164)` | -| eastus2 (East US 2) | `INSERT into system.locations VALUES ('region', 'eastus2', 36.6681, -78.3889)` | -| westus (West US) | `INSERT into system.locations VALUES ('region', 'westus', 37.783, -122.417)` | -| northcentralus (North Central US) | `INSERT into system.locations VALUES ('region', 'northcentralus', 41.8819, -87.6278)` | -| southcentralus (South Central US) | `INSERT into system.locations VALUES ('region', 'southcentralus', 29.4167, -98.5)` | -| northeurope (North Europe) | `INSERT into system.locations VALUES ('region', 'northeurope', 53.3478, -6.2597)` | -| westeurope (West Europe) | `INSERT into system.locations VALUES ('region', 'westeurope', 52.3667, 4.9)` | -| japanwest (Japan West) | `INSERT into system.locations VALUES ('region', 'japanwest', 34.6939, 135.5022)` | -| japaneast (Japan East) | `INSERT into system.locations VALUES ('region', 'japaneast', 35.68, 139.77)` | -| brazilsouth (Brazil South) | `INSERT into system.locations VALUES ('region', 'brazilsouth', -23.55, -46.633)` | -| australiaeast (Australia East) | `INSERT into system.locations VALUES ('region', 'australiaeast', -33.86, 151.2094)` | -| australiasoutheast (Australia Southeast) | `INSERT into system.locations VALUES ('region', 'australiasoutheast', -37.8136, 144.9631)` | -| southindia (South India) | `INSERT into system.locations VALUES ('region', 'southindia', 12.9822, 80.1636)` | -| centralindia (Central India) | `INSERT into system.locations VALUES ('region', 'centralindia', 18.5822, 73.9197)` | -| westindia (West India) | `INSERT into system.locations VALUES ('region', 'westindia', 19.088, 72.868)` | -| canadacentral (Canada Central) | `INSERT into system.locations VALUES ('region', 'canadacentral', 43.653, -79.383)` | -| canadaeast (Canada East) | `INSERT into system.locations VALUES ('region', 'canadaeast', 46.817, -71.217)` | -| uksouth (UK South) | `INSERT into system.locations VALUES ('region', 'uksouth', 50.941, -0.799)` | -| ukwest (UK West) | `INSERT into system.locations VALUES ('region', 'ukwest', 53.427, -3.084)` | -| westcentralus (West Central US) | `INSERT into system.locations VALUES ('region', 'westcentralus', 40.890, -110.234)` | -| westus2 (West US 2) | `INSERT into system.locations VALUES ('region', 'westus2', 47.233, -119.852)` | -| koreacentral (Korea Central) | `INSERT into system.locations VALUES ('region', 'koreacentral', 37.5665, 126.9780)` | -| koreasouth (Korea South) | `INSERT into system.locations VALUES ('region', 'koreasouth', 35.1796, 129.0756)` | -| francecentral (France Central) | `INSERT into system.locations VALUES ('region', 'francecentral', 46.3772, 2.3730)` | -| francesouth (France South) | `INSERT into system.locations VALUES ('region', 'francesouth', 43.8345, 2.1972)` | diff --git 
a/src/current/_includes/v22.1/misc/basic-terms.md b/src/current/_includes/v22.1/misc/basic-terms.md deleted file mode 100644 index 231e29af81f..00000000000 --- a/src/current/_includes/v22.1/misc/basic-terms.md +++ /dev/null @@ -1,12 +0,0 @@ -## CockroachDB architecture terms - -Term | Definition ------|------------ -**cluster** | A group of interconnected storage nodes that collaboratively organize transactions, fault tolerance, and data rebalancing. -**node** | An individual instance of CockroachDB. One or more nodes form a cluster. -**range** | CockroachDB stores all user data (tables, indexes, etc.) and almost all system data in a sorted map of key-value pairs. This keyspace is divided into contiguous chunks called _ranges_, such that every key is found in one range.

From a SQL perspective, a table and its secondary indexes initially map to a single range, where each key-value pair in the range represents a single row in the table (also called the _primary index_ because the table is sorted by the primary key) or a single row in a secondary index. As soon as the size of a range reaches 512 MiB ([the default](../configure-replication-zones.html#range-max-bytes)), it is split into two ranges. This process continues for these new ranges as the table and its indexes continue growing. -**replica** | A copy of a range stored on a node. By default, there are three [replicas](../configure-replication-zones.html#num_replicas) of each range on different nodes. -**leaseholder** | The replica that holds the "range lease." This replica receives and coordinates all read and write requests for the range.

For most types of tables and queries, the leaseholder is the only replica that can serve consistent reads (reads that return "the latest" data). -**Raft protocol** | The [consensus protocol](replication-layer.html#raft) employed in CockroachDB that ensures that your data is safely stored on multiple nodes and that those nodes agree on the current state even if some of them are temporarily disconnected. -**Raft leader** | For each range, the replica that is the "leader" for write requests. The leader uses the Raft protocol to ensure that a majority of replicas (the leader and enough followers) agree, based on their Raft logs, before committing the write. The Raft leader is almost always the same replica as the leaseholder. -**Raft log** | A time-ordered log of writes to a range that its replicas have agreed on. This log exists on-disk with each replica and is the range's source of truth for consistent replication. diff --git a/src/current/_includes/v22.1/misc/beta-release-warning.md b/src/current/_includes/v22.1/misc/beta-release-warning.md deleted file mode 100644 index c228f650d04..00000000000 --- a/src/current/_includes/v22.1/misc/beta-release-warning.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_danger}} -Beta releases are intended for testing and experimentation only. Beta releases are not recommended for production use, as they can lead to data corruption, cluster unavailability, performance issues, etc. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/misc/beta-warning.md b/src/current/_includes/v22.1/misc/beta-warning.md deleted file mode 100644 index 107fc2bfa4b..00000000000 --- a/src/current/_includes/v22.1/misc/beta-warning.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_danger}} -**This is a beta feature.** It is currently undergoing continued testing. Please [file a Github issue](file-an-issue.html) with us if you identify a bug. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/misc/chrome-localhost.md b/src/current/_includes/v22.1/misc/chrome-localhost.md deleted file mode 100644 index d794ff339d0..00000000000 --- a/src/current/_includes/v22.1/misc/chrome-localhost.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -If you are using Google Chrome, and you are getting an error about not being able to reach `localhost` because its certificate has been revoked, go to chrome://flags/#allow-insecure-localhost, enable "Allow invalid certificates for resources loaded from localhost", and then restart the browser. Enabling this Chrome feature degrades security for all sites running on `localhost`, not just CockroachDB's DB Console, so be sure to enable the feature only temporarily. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/misc/client-side-intervention-example.md b/src/current/_includes/v22.1/misc/client-side-intervention-example.md deleted file mode 100644 index d0bbfc33695..00000000000 --- a/src/current/_includes/v22.1/misc/client-side-intervention-example.md +++ /dev/null @@ -1,28 +0,0 @@ -The Python-like pseudocode below shows how to implement an application-level retry loop; it does not require your driver or ORM to implement [advanced retry handling logic](advanced-client-side-transaction-retries.html), so it can be used from any programming language or environment. 
In particular, your retry loop must: - -- Raise an error if the `max_retries` limit is reached -- Retry on `40001` error codes -- [`COMMIT`](commit-transaction.html) at the end of the `try` block -- Implement [exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff) logic as shown below for best performance - -~~~ python -while true: - n++ - if n == max_retries: - throw Error("did not succeed within N retries") - try: - # add logic here to run all your statements - conn.exec('COMMIT') - break - catch error: - if error.code != "40001": - throw error - else: - # This is a retry error, so we roll back the current transaction - # and sleep for a bit before retrying. The sleep time increases - # for each failed transaction. Adapted from - # https://colintemple.com/2017/03/java-exponential-backoff/ - conn.exec('ROLLBACK'); - sleep_ms = int(((2**n) * 100) + rand( 100 - 1 ) + 1) - sleep(sleep_ms) # Assumes your sleep() takes milliseconds -~~~ diff --git a/src/current/_includes/v22.1/misc/csv-import-callout.md b/src/current/_includes/v22.1/misc/csv-import-callout.md deleted file mode 100644 index 60555c5d0b6..00000000000 --- a/src/current/_includes/v22.1/misc/csv-import-callout.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -The column order in your schema must match the column order in the file being imported. -{{site.data.alerts.end}} \ No newline at end of file diff --git a/src/current/_includes/v22.1/misc/customizing-the-savepoint-name.md b/src/current/_includes/v22.1/misc/customizing-the-savepoint-name.md deleted file mode 100644 index ed895f906f3..00000000000 --- a/src/current/_includes/v22.1/misc/customizing-the-savepoint-name.md +++ /dev/null @@ -1,5 +0,0 @@ -Set the `force_savepoint_restart` [session variable](set-vars.html#supported-variables) to `true` to enable using a custom name for the [retry savepoint](advanced-client-side-transaction-retries.html#retry-savepoints). - -Once this variable is set, the [`SAVEPOINT`](savepoint.html) statement will accept any name for the retry savepoint, not just `cockroach_restart`. In addition, it causes every savepoint name to be equivalent to `cockroach_restart`, therefore disallowing the use of [nested transactions](transactions.html#nested-transactions). - -This feature exists to support applications that want to use the [advanced client-side transaction retry protocol](advanced-client-side-transaction-retries.html), but cannot customize the name of savepoints to be `cockroach_restart`. For example, this may be necessary because you are using an ORM that requires its own names for savepoints. diff --git a/src/current/_includes/v22.1/misc/database-terms.md b/src/current/_includes/v22.1/misc/database-terms.md deleted file mode 100644 index 11d9bd67c92..00000000000 --- a/src/current/_includes/v22.1/misc/database-terms.md +++ /dev/null @@ -1,10 +0,0 @@ -## Database terms - -Term | Definition ------|----------- -**consistency** | The requirement that a transaction must change affected data only in allowed ways. CockroachDB uses "consistency" in both the sense of [ACID semantics](https://en.wikipedia.org/wiki/ACID) and the [CAP theorem](https://en.wikipedia.org/wiki/CAP_theorem), albeit less formally than either definition. -**isolation** | The degree to which a transaction may be affected by other transactions running at the same time. 
CockroachDB provides the [`SERIALIZABLE`](https://en.wikipedia.org/wiki/Serializability) isolation level, which is the highest possible and guarantees that every committed transaction has the same result as if each transaction were run one at a time. -**consensus** | The process of reaching agreement on whether a transaction is committed or aborted. CockroachDB uses the [Raft consensus protocol](#architecture-raft). In CockroachDB, when a range receives a write, a quorum of nodes containing replicas of the range acknowledge the write. This means your data is safely stored and a majority of nodes agree on the database's current state, even if some of the nodes are offline.

When a write does not achieve consensus, forward progress halts to maintain consistency within the cluster. -**replication** | The process of creating and distributing copies of data, as well as ensuring that those copies remain consistent. CockroachDB requires all writes to propagate to a [quorum](https://en.wikipedia.org/wiki/Quorum_%28distributed_computing%29) of copies of the data before being considered committed. This ensures the consistency of your data. -**transaction** | A set of operations performed on a database that satisfy the requirements of [ACID semantics](https://en.wikipedia.org/wiki/ACID). This is a crucial feature for a consistent system to ensure developers can trust the data in their database. For more information about how transactions work in CockroachDB, see [Transaction Layer](transaction-layer.html). -**multi-active availability** | A consensus-based notion of high availability that lets each node in the cluster handle reads and writes for a subset of the stored data (on a per-range basis). This is in contrast to _active-passive replication_, in which the active node receives 100% of request traffic, and _active-active_ replication, in which all nodes accept requests but typically cannot guarantee that reads are both up-to-date and fast. diff --git a/src/current/_includes/v22.1/misc/debug-subcommands.md b/src/current/_includes/v22.1/misc/debug-subcommands.md deleted file mode 100644 index 4f6f7d1c678..00000000000 --- a/src/current/_includes/v22.1/misc/debug-subcommands.md +++ /dev/null @@ -1,5 +0,0 @@ -While the `cockroach debug` command has a few subcommands, users are expected to use only the [`zip`](cockroach-debug-zip.html), [`encryption-active-key`](cockroach-debug-encryption-active-key.html), [`merge-logs`](cockroach-debug-merge-logs.html), [`list-files`](cockroach-debug-list-files.html), [`tsdump`](cockroach-debug-tsdump.html), and [`ballast`](cockroach-debug-ballast.html) subcommands. - -We recommend using the [`job-trace`](cockroach-debug-job-trace.html) subcommand only when directed by the [Cockroach Labs support team](support-resources.html). - -The other `debug` subcommands are useful only to CockroachDB's developers and contributors. diff --git a/src/current/_includes/v22.1/misc/declarative-schema-changer-note.md b/src/current/_includes/v22.1/misc/declarative-schema-changer-note.md deleted file mode 100644 index fa0f2d33ed8..00000000000 --- a/src/current/_includes/v22.1/misc/declarative-schema-changer-note.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_danger}} -`{{ page.title }}` now uses the [declarative schema changer](online-schema-changes.html#declarative-schema-changer) by default. Declarative schema changer statements and legacy schema changer statements operating on the same objects cannot exist within the same transaction. Either split the transaction into multiple transactions, or disable either the `sql.defaults.use_declarative_schema_changer` [cluster setting](cluster-settings.html) or the `use_declarative_schema_changer` [session variable](set-vars.html). 
-{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/misc/delete-statistics.md b/src/current/_includes/v22.1/misc/delete-statistics.md deleted file mode 100644 index 3e4c71db3ec..00000000000 --- a/src/current/_includes/v22.1/misc/delete-statistics.md +++ /dev/null @@ -1,15 +0,0 @@ -To delete statistics for all tables in all databases: - -{% include_cached copy-clipboard.html %} -~~~ sql -DELETE FROM system.table_statistics WHERE true; -~~~ - -To delete a named set of statistics (e.g, one named "users_stats"), run a query like the following: - -{% include_cached copy-clipboard.html %} -~~~ sql -DELETE FROM system.table_statistics WHERE name = 'users_stats'; -~~~ - -For more information about the `DELETE` statement, see [`DELETE`](delete.html). diff --git a/src/current/_includes/v22.1/misc/diagnostics-callout.html b/src/current/_includes/v22.1/misc/diagnostics-callout.html deleted file mode 100644 index a969a8cf152..00000000000 --- a/src/current/_includes/v22.1/misc/diagnostics-callout.html +++ /dev/null @@ -1 +0,0 @@ -{{site.data.alerts.callout_info}}By default, each node of a CockroachDB cluster periodically shares anonymous usage details with Cockroach Labs. For an explanation of the details that get shared and how to opt-out of reporting, see Diagnostics Reporting.{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/misc/enterprise-features.md b/src/current/_includes/v22.1/misc/enterprise-features.md deleted file mode 100644 index 9534c0ce442..00000000000 --- a/src/current/_includes/v22.1/misc/enterprise-features.md +++ /dev/null @@ -1,21 +0,0 @@ -## Cluster optimization - -Feature | Description ---------+------------------------- -[Follower Reads](follower-reads.html) | Reduce read latency in multi-region deployments by using the closest replica at the expense of reading slightly historical data. -[Multi-Region Capabilities](multiregion-overview.html) | Row-level control over where your data is stored to help you reduce read and write latency and meet regulatory requirements. -[Node Map](enable-node-map.html) | Visualize the geographical distribution of a cluster by plotting its node localities on a world map. - -## Recovery and streaming - -Feature | Description ---------+------------------------- -Enterprise [`BACKUP`](backup.html) and restore capabilities | Taking and restoring [incremental backups](take-full-and-incremental-backups.html), [backups with revision history](take-backups-with-revision-history-and-restore-from-a-point-in-time.html), [locality-aware backups](take-and-restore-locality-aware-backups.html), and [encrypted backups](take-and-restore-encrypted-backups.html) require an Enterprise license. [Full backups](take-full-and-incremental-backups.html) do not require an Enterprise license. -[Changefeeds into a Configurable Sink](create-changefeed.html) | For every change in a configurable allowlist of tables, configure a changefeed to emit a record to a configurable sink: Apache Kafka, cloud storage, Google Cloud Pub/Sub, or a webhook sink. These records can be processed by downstream systems for reporting, caching, or full-text indexing. - -## Security and IAM - -Feature | Description ---------+------------------------- -[Encryption at Rest](security-reference/encryption.html#encryption-at-rest-enterprise) | Enable automatic transparent encryption of a node's data on the local disk using AES in counter mode, with all key sizes allowed. This feature works together with CockroachDB's automatic encryption of data in transit. 
-[GSSAPI with Kerberos Authentication](gssapi_authentication.html) | Authenticate to your cluster using identities stored in an external enterprise directory system that supports Kerberos, such as Active Directory. diff --git a/src/current/_includes/v22.1/misc/explore-benefits-see-also.md b/src/current/_includes/v22.1/misc/explore-benefits-see-also.md deleted file mode 100644 index 6b1a3afed71..00000000000 --- a/src/current/_includes/v22.1/misc/explore-benefits-see-also.md +++ /dev/null @@ -1,7 +0,0 @@ -- [Replication & Rebalancing](demo-replication-and-rebalancing.html) -- [Fault Tolerance & Recovery](demo-fault-tolerance-and-recovery.html) -- [Low Latency Multi-Region Deployment](demo-low-latency-multi-region-deployment.html) -- [Serializable Transactions](demo-serializable.html) -- [Cross-Cloud Migration](demo-automatic-cloud-migration.html) -- [Orchestration](orchestrate-a-local-cluster-with-kubernetes-insecure.html) -- [JSON Support](demo-json-support.html) diff --git a/src/current/_includes/v22.1/misc/force-index-selection.md b/src/current/_includes/v22.1/misc/force-index-selection.md deleted file mode 100644 index 5a14daa6f2a..00000000000 --- a/src/current/_includes/v22.1/misc/force-index-selection.md +++ /dev/null @@ -1,145 +0,0 @@ -By using the explicit index annotation, you can override [CockroachDB's index selection](https://www.cockroachlabs.com/blog/index-selection-cockroachdb-2/) and use a specific [index](indexes.html) when reading from a named table. - -{{site.data.alerts.callout_info}} -Index selection can impact [performance](performance-best-practices-overview.html), but does not change the result of a query. -{{site.data.alerts.end}} - -##### Force index scan - -The syntax to force a scan of a specific index is: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM table@my_idx; -~~~ - -This is equivalent to the longer expression: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM table@{FORCE_INDEX=my_idx}; -~~~ - -##### Force reverse scan - -The syntax to force a reverse scan of a specific index is: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM table@{FORCE_INDEX=my_idx,DESC}; -~~~ - -Forcing a reverse scan is sometimes useful during [performance tuning](performance-best-practices-overview.html). For reference, the full syntax for choosing an index and its scan direction is: - -{% include_cached copy-clipboard.html %} -~~~ sql -SELECT * FROM table@{FORCE_INDEX=idx[,DIRECTION]} -~~~ - -where the optional `DIRECTION` is either `ASC` (ascending) or `DESC` (descending). - -When a direction is specified, that scan direction is forced; otherwise the [cost-based optimizer](cost-based-optimizer.html) is free to choose the direction it calculates will result in the best performance. - -You can verify that the optimizer is choosing your desired scan direction using [`EXPLAIN (OPT)`](explain.html#opt-option). For example, given the table - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE kv (K INT PRIMARY KEY, v INT); -~~~ - -you can check the scan direction with: - -{% include_cached copy-clipboard.html %} -~~~ sql -> EXPLAIN (opt) SELECT * FROM kv@{FORCE_INDEX=primary,DESC}; -~~~ - -~~~ - text -+-------------------------------------+ - scan kv,rev - └── flags: force-index=primary,rev -(2 rows) -~~~ - -##### Force partial index scan - -To force a [partial index scan](partial-indexes.html), your statement must have a `WHERE` clause that implies the partial index filter. 
- -{% include_cached copy-clipboard.html %} -~~~ sql -CREATE TABLE t ( - a INT, - INDEX idx (a) WHERE a > 0); -INSERT INTO t(a) VALUES (5); -SELECT * FROM t@idx WHERE a > 0; -~~~ - -~~~ -CREATE TABLE - -Time: 13ms total (execution 12ms / network 0ms) - -INSERT 1 - -Time: 22ms total (execution 21ms / network 0ms) - - a ------ - 5 -(1 row) - -Time: 1ms total (execution 1ms / network 0ms) -~~~ - -##### Force partial GIN index scan - -To force a [partial GIN index](inverted-indexes.html#partial-gin-indexes) scan, your statement must have a `WHERE` clause that: - -- Implies the partial index. -- Constrains the GIN index scan. - -{% include_cached copy-clipboard.html %} -~~~ sql -DROP TABLE t; -CREATE TABLE t ( - j JSON, - INVERTED INDEX idx (j) WHERE j->'a' = '1'); -INSERT INTO t(j) - VALUES ('{"a": 1}'), - ('{"a": 3, "b": 2}'), - ('{"a": 1, "b": 2}'); -SELECT * FROM t@idx WHERE j->'a' = '1' AND j->'b' = '2'; -~~~ - -~~~ -DROP TABLE - -Time: 68ms total (execution 22ms / network 45ms) - -CREATE TABLE - -Time: 10ms total (execution 10ms / network 0ms) - -INSERT 3 - -Time: 22ms total (execution 22ms / network 0ms) - - j --------------------- - {"a": 1, "b": 2} -(1 row) - -Time: 1ms total (execution 1ms / network 0ms) -~~~ - -##### Prevent full scan - -To prevent the optimizer from planning a full scan for a table, specify the `NO_FULL_SCAN` index hint. For example: - -~~~sql -SELECT * FROM table_name@{NO_FULL_SCAN}; -~~~ - -To prevent a full scan of a [partial index](#force-partial-index-scan), you must specify `NO_FULL_SCAN` _in combination with_ the partial index using `FORCE_INDEX=index_name`. -If you specify only `NO_FULL_SCAN`, a full scan of a partial index may be planned. diff --git a/src/current/_includes/v22.1/misc/gce-locations.md b/src/current/_includes/v22.1/misc/gce-locations.md deleted file mode 100644 index 22122aae78d..00000000000 --- a/src/current/_includes/v22.1/misc/gce-locations.md +++ /dev/null @@ -1,18 +0,0 @@ -| Location | SQL Statement | -| ------ | ------ | -| us-east1 (South Carolina) | `INSERT into system.locations VALUES ('region', 'us-east1', 33.836082, -81.163727)` | -| us-east4 (N. 
Virginia) | `INSERT into system.locations VALUES ('region', 'us-east4', 37.478397, -76.453077)` | -| us-central1 (Iowa) | `INSERT into system.locations VALUES ('region', 'us-central1', 42.032974, -93.581543)` | -| us-west1 (Oregon) | `INSERT into system.locations VALUES ('region', 'us-west1', 43.804133, -120.554201)` | -| northamerica-northeast1 (Montreal) | `INSERT into system.locations VALUES ('region', 'northamerica-northeast1', 56.130366, -106.346771)` | -| europe-west1 (Belgium) | `INSERT into system.locations VALUES ('region', 'europe-west1', 50.44816, 3.81886)` | -| europe-west2 (London) | `INSERT into system.locations VALUES ('region', 'europe-west2', 51.507351, -0.127758)` | -| europe-west3 (Frankfurt) | `INSERT into system.locations VALUES ('region', 'europe-west3', 50.110922, 8.682127)` | -| europe-west4 (Netherlands) | `INSERT into system.locations VALUES ('region', 'europe-west4', 53.4386, 6.8355)` | -| europe-west6 (Zürich) | `INSERT into system.locations VALUES ('region', 'europe-west6', 47.3769, 8.5417)` | -| asia-east1 (Taiwan) | `INSERT into system.locations VALUES ('region', 'asia-east1', 24.0717, 120.5624)` | -| asia-northeast1 (Tokyo) | `INSERT into system.locations VALUES ('region', 'asia-northeast1', 35.689487, 139.691706)` | -| asia-southeast1 (Singapore) | `INSERT into system.locations VALUES ('region', 'asia-southeast1', 1.352083, 103.819836)` | -| australia-southeast1 (Sydney) | `INSERT into system.locations VALUES ('region', 'australia-southeast1', -33.86882, 151.209296)` | -| asia-south1 (Mumbai) | `INSERT into system.locations VALUES ('region', 'asia-south1', 19.075984, 72.877656)` | -| southamerica-east1 (São Paulo) | `INSERT into system.locations VALUES ('region', 'southamerica-east1', -23.55052, -46.633309)` | diff --git a/src/current/_includes/v22.1/misc/geojson_geometry_note.md b/src/current/_includes/v22.1/misc/geojson_geometry_note.md deleted file mode 100644 index ba5fe199657..00000000000 --- a/src/current/_includes/v22.1/misc/geojson_geometry_note.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -The screenshots in these examples were generated using [geojson.io](http://geojson.io), but they are designed to showcase the shapes, not the map. Representing `GEOMETRY` data in GeoJSON can lead to unexpected results if using geometries with [SRIDs](spatial-glossary.html#srid) other than 4326 (as shown below). -{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/misc/haproxy.md b/src/current/_includes/v22.1/misc/haproxy.md deleted file mode 100644 index 375af8e937d..00000000000 --- a/src/current/_includes/v22.1/misc/haproxy.md +++ /dev/null @@ -1,39 +0,0 @@ -By default, the generated configuration file is called `haproxy.cfg` and looks as follows, with the `server` addresses pre-populated correctly: - - ~~~ - global - maxconn 4096 - - defaults - mode tcp - # Timeout values should be configured for your specific use. - # See: https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-timeout%20connect - timeout connect 10s - timeout client 1m - timeout server 1m - # TCP keep-alive on client side. Server already enables them. 
- option clitcpka - - listen psql - bind :26257 - mode tcp - balance roundrobin - option httpchk GET /health?ready=1 - server cockroach1 :26257 check port 8080 - server cockroach2 :26257 check port 8080 - server cockroach3 :26257 check port 8080 - ~~~ - - The file is preset with the minimal [configurations](http://cbonte.github.io/haproxy-dconv/1.7/configuration.html) needed to work with your running cluster: - - Field | Description - ------|------------ - `timeout connect`
`timeout client`
`timeout server` | Timeout values that should be suitable for most deployments. - `bind` | The port that HAProxy listens on. This is the port clients will connect to and thus needs to be allowed by your network configuration.

This tutorial assumes HAProxy is running on a separate machine from CockroachDB nodes. If you run HAProxy on the same machine as a node (not recommended), you'll need to change this port, as `26257` is likely already being used by the CockroachDB node. - `balance` | The balancing algorithm. This is set to `roundrobin` to ensure that connections get rotated amongst nodes (connection 1 on node 1, connection 2 on node 2, etc.). Check the [HAProxy Configuration Manual](http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4-balance) for details about this and other balancing algorithms. - `option httpchk` | The HTTP endpoint that HAProxy uses to check node health. [`/health?ready=1`](monitoring-and-alerting.html#health-ready-1) ensures that HAProxy doesn't direct traffic to nodes that are live but not ready to receive requests. - `server` | For each included node, this field specifies the address the node advertises to other nodes in the cluster, i.e., the address passed in the [`--advertise-addr` flag](cockroach-start.html#networking) on node startup. Make sure hostnames are resolvable and IP addresses are routable from HAProxy. - - {{site.data.alerts.callout_info}} - For full details on these and other configuration settings, see the [HAProxy Configuration Manual](http://cbonte.github.io/haproxy-dconv/1.7/configuration.html). - {{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/misc/htpp-import-only.md b/src/current/_includes/v22.1/misc/htpp-import-only.md deleted file mode 100644 index e69de29bb2d..00000000000 diff --git a/src/current/_includes/v22.1/misc/import-perf.md b/src/current/_includes/v22.1/misc/import-perf.md deleted file mode 100644 index b0520a9c392..00000000000 --- a/src/current/_includes/v22.1/misc/import-perf.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_success}} -For best practices for optimizing import performance in CockroachDB, see [Import Performance Best Practices](import-performance-best-practices.html). -{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/misc/index-storage-parameters.md b/src/current/_includes/v22.1/misc/index-storage-parameters.md deleted file mode 100644 index 174b50e8bf2..00000000000 --- a/src/current/_includes/v22.1/misc/index-storage-parameters.md +++ /dev/null @@ -1,14 +0,0 @@ -| Parameter name | Description | Data type | Default value -|---------------------+----------------------|-----|------| -| `bucket_count` | The number of buckets into which a [hash-sharded index](hash-sharded-indexes.html) will split. | Integer | The value of the `sql.defaults.default_hash_sharded_index_bucket_count` [cluster setting](cluster-settings.html). | -| `geometry_max_x` | The maximum X-value of the [spatial reference system](spatial-glossary.html#spatial-reference-system) for the object(s) being covered. This only needs to be set if you are using a custom [SRID](spatial-glossary.html#srid). | | Derived from SRID bounds, else `(1 << 31) -1`. | -| `geometry_max_y` | The maximum Y-value of the [spatial reference system](spatial-glossary.html#spatial-reference-system) for the object(s) being covered. This only needs to be set if you are using a custom [SRID](spatial-glossary.html#srid). | | Derived from SRID bounds, else `(1 << 31) -1`. | -| `geometry_min_x` | The minimum X-value of the [spatial reference system](spatial-glossary.html#spatial-reference-system) for the object(s) being covered. 
This only needs to be set if the default bounds of the SRID are too large/small for the given data, or SRID = 0 and you wish to use a smaller range (unfortunately this is currently not exposed, but is viewable on ). By default, SRID = 0 assumes `[-min int32, max int32]` ranges. | | Derived from SRID bounds, else `-(1 << 31)`. | -| `geometry_min_y` | The minimum Y-value of the [spatial reference system](spatial-glossary.html#spatial-reference-system) for the object(s) being covered. This only needs to be set if you are using a custom [SRID](spatial-glossary.html#srid). | | Derived from SRID bounds, else `-(1 << 31)`. | -| `s2_level_mod` | `s2_max_level` must be divisible by `s2_level_mod`. `s2_level_mod` must be between `1` and `3`. | Integer | `1` | -| `s2_max_cells` | The maximum number of S2 cells used in the covering. Provides a limit on how much work is done exploring the possible coverings. Allowed values: `1-30`. You may want to use higher values for odd-shaped regions such as skinny rectangles. Used in [spatial indexes](spatial-indexes.html). | Integer | `4` | -| `s2_max_level` | The maximum level of S2 cell used in the covering. Allowed values: `1-30`. Setting it to less than the default means that CockroachDB will be forced to generate coverings using larger cells. Used in [spatial indexes](spatial-indexes.html). | Integer | `30` | - -The following parameters are included for PostgreSQL compatibility and do not affect how CockroachDB runs: - -- `fillfactor` diff --git a/src/current/_includes/v22.1/misc/install-next-steps.html b/src/current/_includes/v22.1/misc/install-next-steps.html deleted file mode 100644 index bb7a9ebc388..00000000000 --- a/src/current/_includes/v22.1/misc/install-next-steps.html +++ /dev/null @@ -1,16 +0,0 @@ - diff --git a/src/current/_includes/v22.1/misc/linux-binary-prereqs.md b/src/current/_includes/v22.1/misc/linux-binary-prereqs.md deleted file mode 100644 index 541183fe71b..00000000000 --- a/src/current/_includes/v22.1/misc/linux-binary-prereqs.md +++ /dev/null @@ -1 +0,0 @@ -

The CockroachDB binary for Linux requires glibc, libncurses, and tzdata, which are found by default on nearly all Linux distributions, with Alpine as the notable exception.

diff --git a/src/current/_includes/v22.1/misc/logging-defaults.md b/src/current/_includes/v22.1/misc/logging-defaults.md deleted file mode 100644 index 1a7ae68a536..00000000000 --- a/src/current/_includes/v22.1/misc/logging-defaults.md +++ /dev/null @@ -1,3 +0,0 @@ -By default, this command logs messages to `stderr`. This includes events with `WARNING` [severity](logging.html#logging-levels-severities) and higher. - -If you need to troubleshoot this command's behavior, you can [customize its logging behavior](configure-logs.html). \ No newline at end of file diff --git a/src/current/_includes/v22.1/misc/logging-flags.md b/src/current/_includes/v22.1/misc/logging-flags.md deleted file mode 100644 index eaadb6c8ddb..00000000000 --- a/src/current/_includes/v22.1/misc/logging-flags.md +++ /dev/null @@ -1,11 +0,0 @@ -Flag | Description ------|------------ -`--log` | Configure logging parameters by specifying a YAML payload. For details, see [Configure logs](configure-logs.html#flag). If a YAML configuration is not specified, the [default configuration](configure-logs.html#default-logging-configuration) is used.

`--log-config-file` can also be used.

**Note:** The deprecated logging flags below cannot be combined with `--log`, and can be defined instead in the YAML payload. -`--log-config-file` | Configure logging parameters by specifying a path to a YAML file. For details, see [Configure logs](configure-logs.html#flag). If a YAML configuration is not specified, the [default configuration](configure-logs.html#default-logging-configuration) is used.

`--log` can also be used.

**Note:** The deprecated logging flags below cannot be combined with `--log-config-file`, and can be defined instead in the YAML payload. -`--log-dir` | **Deprecated.** To enable logging to files and write logs to the specified directory, use [`--log`](configure-logs.html#flag) and set `dir` in the YAML configuration.

Setting `--log-dir` to a blank directory (`--log-dir=`) disables logging to files. Do not use `--log-dir=""`; this creates a new directory named `""` and stores log files in that directory. -`--log-group-max-size` | **Deprecated.** This is now configured with [`--log`](configure-logs.html#flag) or [`--log-config-file`](configure-logs.html#flag) and a YAML payload. After the logging group (i.e., `cockroach`, `cockroach-sql-audit`, `cockroach-auth`, `cockroach-sql-exec`, `cockroach-pebble`) reaches the specified size, delete the oldest log file. The flag's argument takes standard file sizes, such as `--log-group-max-size=1GiB`.

**Default**: 100MiB -`--log-file-max-size` | **Deprecated.** This is now configured with [`--log`](configure-logs.html#flag) or [`--log-config-file`](configure-logs.html#flag) and a YAML payload. After logs reach the specified size, begin writing logs to a new file. The flag's argument takes standard file sizes, such as `--log-file-max-size=2MiB`.

**Default**: 10MiB -`--log-file-verbosity` | **Deprecated.** This is now configured with [`--log`](configure-logs.html#flag) or [`--log-config-file`](configure-logs.html#flag) and a YAML payload. Only writes messages to log files if they are at or above the specified [severity level](logging.html#logging-levels-severities), such as `--log-file-verbosity=WARNING`. **Requires** logging to files.

**Default**: `INFO` -`--logtostderr` | **Deprecated.** This is now configured with [`--log`](configure-logs.html#flag) or [`--log-config-file`](configure-logs.html#flag) and a YAML payload. Enable logging to `stderr` for messages at or above the specified [severity level](logging.html#logging-levels-severities), such as `--logtostderr=ERROR`

If you use this flag without specifying the severity level (e.g., `cockroach start --logtostderr`), it prints messages of *all* severities to `stderr`.

Setting `--logtostderr=NONE` disables logging to `stderr`. -`--no-color` | Do not colorize `stderr`. Possible values: `true` or `false`.

When set to `false`, messages logged to `stderr` are colorized based on [severity level](logging.html#logging-levels-severities).

**Default:** `false` -`--sql-audit-dir` | **Deprecated.** This is now configured with [`--log`](configure-logs.html#flag) or [`--log-config-file`](configure-logs.html#flag) and a YAML payload. If non-empty, output the `SENSITIVE_ACCESS` [logging channel](logging-overview.html#logging-channels) to this directory.

Note that enabling `SENSITIVE_ACCESS` logs can negatively impact performance. As a result, we recommend using the `SENSITIVE_ACCESS` channel for security purposes only. For more information, see [Logging use cases](logging-use-cases.html#security-and-audit-monitoring). diff --git a/src/current/_includes/v22.1/misc/movr-live-demo.md b/src/current/_includes/v22.1/misc/movr-live-demo.md deleted file mode 100644 index f8cfb24cb21..00000000000 --- a/src/current/_includes/v22.1/misc/movr-live-demo.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_success}} -For a live demo of the deployed example application, see [https://movr.cloud](https://movr.cloud). -{{site.data.alerts.end}} \ No newline at end of file diff --git a/src/current/_includes/v22.1/misc/movr-schema.md b/src/current/_includes/v22.1/misc/movr-schema.md deleted file mode 100644 index 3d9d1f77327..00000000000 --- a/src/current/_includes/v22.1/misc/movr-schema.md +++ /dev/null @@ -1,12 +0,0 @@ -The six tables in the `movr` database store user, vehicle, and ride data for MovR: - -Table | Description ---------|---------------------------- -`users` | People registered for the service. -`vehicles` | The pool of vehicles available for the service. -`rides` | When and where users have rented a vehicle. -`promo_codes` | Promotional codes for users. -`user_promo_codes` | Promotional codes in use by users. -`vehicle_location_histories` | Vehicle location history. - -Geo-partitioning schema diff --git a/src/current/_includes/v22.1/misc/movr-workflow.md b/src/current/_includes/v22.1/misc/movr-workflow.md deleted file mode 100644 index 948d95dc1de..00000000000 --- a/src/current/_includes/v22.1/misc/movr-workflow.md +++ /dev/null @@ -1,76 +0,0 @@ -The workflow for MovR is as follows: - -1. A user loads the app and sees the 25 closest vehicles. - - For example: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SELECT id, city, status FROM vehicles WHERE city='amsterdam' limit 25; - ~~~ - -2. The user signs up for the service. - - For example: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > INSERT INTO users (id, name, address, city, credit_card) - VALUES ('66666666-6666-4400-8000-00000000000f', 'Mariah Lam', '88194 Angela Gardens Suite 60', 'amsterdam', '123245696'); - ~~~ - - {{site.data.alerts.callout_info}}Usually for Universally Unique Identifier (UUID) you would need to generate it automatically but for the sake of this follow up we will use predetermined UUID to keep track of them in our examples.{{site.data.alerts.end}} - -3. In some cases, the user adds their own vehicle to share. - - For example: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > INSERT INTO vehicles (id, city, type, owner_id,creation_time,status, current_location, ext) - VALUES ('ffffffff-ffff-4400-8000-00000000000f', 'amsterdam', 'skateboard', '66666666-6666-4400-8000-00000000000f', current_timestamp(), 'available', '88194 Angela Gardens Suite 60', '{"color": "blue"}'); - ~~~ -4. More often, the user reserves a vehicle and starts a ride, applying a promo code, if available and valid. 
- - For example: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SELECT code FROM user_promo_codes WHERE user_id ='66666666-6666-4400-8000-00000000000f'; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > UPDATE vehicles SET status = 'in_use' WHERE id='bbbbbbbb-bbbb-4800-8000-00000000000b'; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > INSERT INTO rides (id, city, vehicle_city, rider_id, vehicle_id, start_address,end_address, start_time, end_time, revenue) - VALUES ('cd032f56-cf1a-4800-8000-00000000066f', 'amsterdam', 'amsterdam', '66666666-6666-4400-8000-00000000000f', 'bbbbbbbb-bbbb-4800-8000-00000000000b', '70458 Mary Crest', '', TIMESTAMP '2020-10-01 10:00:00.123456', NULL, 0.0); - ~~~ - -5. During the ride, MovR tracks the location of the vehicle. - - For example: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > INSERT INTO vehicle_location_histories (city, ride_id, timestamp, lat, long) - VALUES ('amsterdam', 'cd032f56-cf1a-4800-8000-00000000066f', current_timestamp(), -101, 60); - ~~~ - -6. The user ends the ride and releases the vehicle. - - For example: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > UPDATE vehicles SET status = 'available' WHERE id='bbbbbbbb-bbbb-4800-8000-00000000000b'; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > UPDATE rides SET end_address ='33862 Charles Junctions Apt. 49', end_time=TIMESTAMP '2020-10-01 10:30:00.123456', revenue=88.6 - WHERE id='cd032f56-cf1a-4800-8000-00000000066f'; - ~~~ diff --git a/src/current/_includes/v22.1/misc/multiregion-max-offset.md b/src/current/_includes/v22.1/misc/multiregion-max-offset.md deleted file mode 100644 index 07a0dab59c3..00000000000 --- a/src/current/_includes/v22.1/misc/multiregion-max-offset.md +++ /dev/null @@ -1 +0,0 @@ -For new clusters using the [multi-region SQL abstractions](multiregion-overview.html), Cockroach Labs recommends lowering the [`--max-offset`](cockroach-start.html#flags-max-offset) setting to `250ms`. This setting is especially helpful for lowering the write latency of [global tables](multiregion-overview.html#global-tables). For existing clusters, changing the setting will require restarting all of the nodes in your cluster at the same time; it cannot be done with a rolling restart. diff --git a/src/current/_includes/v22.1/misc/non-http-source-privileges.md b/src/current/_includes/v22.1/misc/non-http-source-privileges.md deleted file mode 100644 index dfea2d411e2..00000000000 --- a/src/current/_includes/v22.1/misc/non-http-source-privileges.md +++ /dev/null @@ -1,12 +0,0 @@ -The source file URL does **not** require the [`admin` role](security-reference/authorization.html#admin-role) in the following scenarios: - -- S3 and GS using `SPECIFIED` (and not `IMPLICIT`) credentials. Azure is always `SPECIFIED` by default. -- [Userfile](use-userfile-for-bulk-operations.html) - -The source file URL **does** require the [`admin` role](security-reference/authorization.html#admin-role) in the following scenarios: - -- S3 or GS using `IMPLICIT` credentials -- Use of a [custom endpoint](https://docs.aws.amazon.com/sdk-for-go/api/aws/endpoints/) on S3 -- [Nodelocal](cockroach-nodelocal-upload.html) - -We recommend using [cloud storage for bulk operations](use-cloud-storage-for-bulk-operations.html). 
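As a rough illustration of the `SPECIFIED` vs. `IMPLICIT` distinction above, the sketch below shows two hypothetical `IMPORT` statements; the bucket name, target table, and credential placeholders are assumptions for the example. The first embeds credentials in the URI and does not need the `admin` role, while the second relies on implicit authentication and therefore does:

~~~ sql
-- SPECIFIED credentials embedded in the URI: admin role not required.
IMPORT INTO orders
    CSV DATA ('s3://acme-imports/orders.csv?AWS_ACCESS_KEY_ID=<key-id>&AWS_SECRET_ACCESS_KEY=<url-encoded-secret>');

-- IMPLICIT credentials taken from the node's environment: admin role required.
IMPORT INTO orders
    CSV DATA ('s3://acme-imports/orders.csv?AUTH=implicit');
~~~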
diff --git a/src/current/_includes/v22.1/misc/remove-user-callout.html b/src/current/_includes/v22.1/misc/remove-user-callout.html deleted file mode 100644 index 925f83d779d..00000000000 --- a/src/current/_includes/v22.1/misc/remove-user-callout.html +++ /dev/null @@ -1 +0,0 @@ -Removing a user does not remove that user's privileges. Therefore, to prevent a future user with an identical username from inheriting an old user's privileges, it's important to revoke a user's privileges before or after removing the user. diff --git a/src/current/_includes/v22.1/misc/s3-compatible-warning.md b/src/current/_includes/v22.1/misc/s3-compatible-warning.md deleted file mode 100644 index 1e12b5611d3..00000000000 --- a/src/current/_includes/v22.1/misc/s3-compatible-warning.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_danger}} -While Cockroach Labs actively tests Amazon S3, Google Cloud Storage, and Azure Storage, we **do not** test [S3-compatible services](use-cloud-storage-for-bulk-operations.html) (e.g., [MinIO](https://min.io/), [Red Hat Ceph](https://docs.ceph.com/en/pacific/radosgw/s3/)). -{{site.data.alerts.end}} \ No newline at end of file diff --git a/src/current/_includes/v22.1/misc/schema-change-stmt-note.md b/src/current/_includes/v22.1/misc/schema-change-stmt-note.md deleted file mode 100644 index 576fa59a39c..00000000000 --- a/src/current/_includes/v22.1/misc/schema-change-stmt-note.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -The `{{ page.title }}` statement performs a schema change. For more information about how online schema changes work in CockroachDB, see [Online Schema Changes](online-schema-changes.html). -{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/misc/schema-change-view-job.md b/src/current/_includes/v22.1/misc/schema-change-view-job.md deleted file mode 100644 index 8861174d621..00000000000 --- a/src/current/_includes/v22.1/misc/schema-change-view-job.md +++ /dev/null @@ -1 +0,0 @@ -This schema change statement is registered as a job. You can view long-running jobs with [`SHOW JOBS`](show-jobs.html). diff --git a/src/current/_includes/v22.1/misc/session-vars.md b/src/current/_includes/v22.1/misc/session-vars.md deleted file mode 100644 index 4726ecdfa83..00000000000 --- a/src/current/_includes/v22.1/misc/session-vars.md +++ /dev/null @@ -1,82 +0,0 @@ -| Variable name | Description | Initial value | Modify with [`SET`](set-vars.html)? | View with [`SHOW`](show-vars.html)? | -|---|---|---|---|---| -| `application_name` | The current application name for statistics collection. | Empty string, or `cockroach` for sessions from the [built-in SQL client](cockroach-sql.html). | Yes | Yes | -| `bytea_output` | The [mode for conversions from `STRING` to `BYTES`](bytes.html#supported-conversions). | hex | Yes | Yes | -| `client_min_messages` | The severity level of notices displayed in the [SQL shell](cockroach-sql.html). Accepted values include `debug5`, `debug4`, `debug3`, `debug2`, `debug1`, `log`, `notice`, `warning`, and `error`. | `notice` | Yes | Yes | -| `crdb_version` | The version of CockroachDB. | CockroachDB OSS version | No | Yes | -| `database` | The [current database](sql-name-resolution.html#current-database). | Database in connection string, or empty if not specified. | Yes | Yes | -| `datestyle` | The input string format for [`DATE`](date.html) and [`TIMESTAMP`](timestamp.html) values. Accepted values include `ISO,MDY`, `ISO,DMY`, and `ISO,YMD`. 
| The value set by the `sql.defaults.datestyle` [cluster setting](cluster-settings.html) (`ISO,MDY`, by default). | Yes | Yes | -| `default_int_size` | The size, in bytes, of an [`INT`](int.html) type. | `8` | Yes | Yes | -| `default_transaction_isolation` | All transactions execute with `SERIALIZABLE` isolation. See [Transactions: Isolation levels](transactions.html#isolation-levels). | `SERIALIZABLE` | No | Yes | -| `default_transaction_priority` | The default transaction priority for the current session. The supported options are `low`, `normal`, and `high`. | `normal` | Yes | Yes | -| `default_transaction_quality_of_service` | **New in v22.1:** The default transaction quality of service for the current session. The supported options are `regular`, `critical`, and `background`. See [Set quality of service level](admission-control.html#set-quality-of-service-level-for-a-session). | `regular` | Yes | Yes | -| `default_transaction_read_only` | The default transaction access mode for the current session.
If set to `on`, only read operations are allowed in transactions in the current session; if set to `off`, both read and write operations are allowed. See [`SET TRANSACTION`](set-transaction.html) for more details. | `off` | Yes | Yes | -| `default_transaction_use_follower_reads` | If set to `on`, all read-only transactions use [`AS OF SYSTEM TIME follower_read_timestamp()`](as-of-system-time.html) to allow the transaction to use follower reads.<br>
If set to `off`, read-only transactions will only use follower reads if an `AS OF SYSTEM TIME` clause is specified in the statement, with an interval of at least 4.8 seconds. | `off` | Yes | Yes | -| `disallow_full_table_scans` | If set to `on`, all queries that have planned a full table or full secondary index scan will return an error message. This setting does not apply to internal queries, which may plan full table or index scans without checking the session variable. | `off` | Yes | Yes | -| `distsql` | The query distribution mode for the session. By default, CockroachDB determines which queries are faster to execute if distributed across multiple nodes, and all other queries are run through the gateway node. | `auto` | Yes | Yes | -| `enable_implicit_select_for_update` | Indicates whether [`UPDATE`](update.html) and [`UPSERT`](upsert.html) statements acquire locks using the `FOR UPDATE` locking mode during their initial row scan, which improves performance for contended workloads.

For more information about how `FOR UPDATE` locking works, see the documentation for [`SELECT FOR UPDATE`](select-for-update.html). | `on` | Yes | Yes | -| `enable_insert_fast_path` | Indicates whether CockroachDB will use a specialized execution operator for inserting into a table. We recommend leaving this setting `on`. | `on` | Yes | Yes | -| `enable_implicit_transaction_for_batch_statements` | Indicates whether multiple statements in a single query (a "batch statement") will all run in the same implicit transaction, which matches the PostgreSQL wire protocol. | `off` | Yes | Yes | -| `enable_zigzag_join` | Indicates whether the [cost-based optimizer](cost-based-optimizer.html) will plan certain queries using a zig-zag merge join algorithm, which searches for the desired intersection by jumping back and forth between the indexes based on the fact that after constraining indexes, they share an ordering. | `on` | Yes | Yes | -| `extra_float_digits` | The number of digits displayed for floating-point values. Only values between `-15` and `3` are supported. | `0` | Yes | Yes | -| `force_savepoint_restart` | When set to `true`, allows the [`SAVEPOINT`](savepoint.html) statement to accept any name for a savepoint. | `off` | Yes | Yes | -| `foreign_key_cascades_limit` | Limits the number of [cascading operations](foreign-key.html#use-a-foreign-key-constraint-with-cascade) that run as part of a single query. | `10000` | Yes | Yes | -| `idle_in_session_timeout` | Automatically terminates sessions that idle past the specified threshold.

When set to `0`, the session will not time out. | The value set by the `sql.defaults.idle_in_session_timeout` [cluster setting](cluster-settings.html) (`0s`, by default). | Yes | Yes | -| `idle_in_transaction_session_timeout` | Automatically terminates sessions that are idle in a transaction past the specified threshold.<br>

When set to `0`, the session will not time out. | The value set by the `sql.defaults.idle_in_transaction_session_timeout` [cluster setting](cluster-settings.html) (`0s`, by default). | Yes | Yes | -| `index_recommendations_enabled` | **New in v22.1:** If `true`, displays recommendations to create indexes required to eliminate full table scans.<br>
For more details, see [Default statement plans](explain.html#default-statement-plans). | `true` | Yes | Yes | -| `inject_retry_errors_enabled` | **New in v22.1:** If `true`, any statement executed inside of an explicit transaction (with the exception of [`SET`](set-vars.html) statements) will return a transaction retry error. If the client retries the transaction using the special [`cockroach_restart SAVEPOINT` name](savepoint.html#savepoints-for-client-side-transaction-retries), after the 3rd retry error, the transaction will proceed as normal. Otherwise, the errors will continue until `inject_retry_errors_enabled` is set to `false`. For more details, see [Testing transaction retry logic](transactions.html#testing-transaction-retry-logic). | `false` | Yes | Yes | -| `intervalstyle` | The input string format for [`INTERVAL`](interval.html) values. Accepted values include `postgres`, `iso_8601`, and `sql_standard`. | The value set by the `sql.defaults.intervalstyle` [cluster setting](cluster-settings.html) (`postgres`, by default). | Yes | Yes | -| `is_superuser` | If `on` or `true`, the current user is a member of the [`admin` role](security-reference/authorization.html#admin-role). | User-dependent | No | Yes | -| `large_full_scan_rows` | Determines which tables are considered "large" such that `disallow_full_table_scans` rejects full table or index scans of "large" tables. The default value is `1000`. To reject all full table or index scans, set to `0`. | User-dependent | No | Yes | -| `locality` | The location of the node.

For more information, see [Locality](cockroach-start.html#locality). | Node-dependent | No | Yes | -| `lock_timeout` | The amount of time a query can spend acquiring or waiting for a single [row-level lock](architecture/transaction-layer.html#concurrency-control).
In CockroachDB, unlike in PostgreSQL, non-locking reads wait for conflicting locks to be released. As a result, the `lock_timeout` configuration applies to writes, and to locking and non-locking reads in read-write and read-only transactions.
If `lock_timeout = 0`, queries do not time out due to lock acquisitions. | The value set by the `sql.defaults.lock_timeout` [cluster setting](cluster-settings.html) (`0`, by default). | Yes | Yes | -| `node_id` | The ID of the node that the client is currently connected to.<br>

This variable is particularly useful for verifying load-balanced connections. | Node-dependent | No | Yes | -| `null_ordered_last` | **New in v22.1:** Sets the default ordering of `NULL`s. The default order is `NULL`s first for ascending order and `NULL`s last for descending order. | `false` | Yes | Yes | -| `optimizer_use_histograms` | If `on`, the optimizer uses collected histograms for cardinality estimation. | `on` | No | Yes | -| `optimizer_use_multicol_stats` | If `on`, the optimizer uses collected multi-column statistics for cardinality estimation. | `on` | No | Yes | -| `prefer_lookup_joins_for_fks` | If `on`, the optimizer prefers [lookup joins](joins.html#lookup-joins) to [merge joins](joins.html#merge-joins) when performing [foreign key](foreign-key.html) checks. | `off` | Yes | Yes | -| `reorder_joins_limit` | Maximum number of joins that the optimizer will attempt to reorder when searching for an optimal query execution plan.<br>

For more information, see [Join reordering](cost-based-optimizer.html#join-reordering). | `8` | Yes | Yes | -| `results_buffer_size` | The default size of the buffer that accumulates results for a statement or a batch of statements before they are sent to the client.
This can also be set for all connections using the `sql.defaults.results_buffer_size` [cluster setting](cluster-settings.html). Note that auto-retries generally only happen while no results have been delivered to the client, so reducing this size can increase the number of retryable errors a client receives. On the other hand, increasing the buffer size can increase the delay until the client receives the first result row. Setting it to `0` disables any buffering. | `16384` | Yes | Yes | -| `require_explicit_primary_keys` | If `on`, CockroachDB throws an error for all tables created without an explicit primary key defined. | `off` | Yes | Yes | -| `search_path` | A list of schemas that will be searched to resolve unqualified table or function names.<br>
For more details, see [SQL name resolution](sql-name-resolution.html). | `public` | Yes | Yes | -| `serial_normalization` | Specifies the default handling of [`SERIAL`](serial.html) in table definitions. Valid options include `'rowid'`, `'virtual_sequence'`, `sql_sequence`, `sql_sequence_cached`, and `unordered_rowid`.
If set to `'virtual_sequence'`, the `SERIAL` type auto-creates a sequence for [better compatibility with Hibernate sequences](https://forum.cockroachlabs.com/t/hibernate-sequence-generator-returns-negative-number-and-ignore-unique-rowid/1885).
If set to `sql_sequence_cached`, you can use the `sql.defaults.serial_sequences_cache_size` [cluster setting](cluster-settings.html) to control the number of values to cache in a user's session, with a default of 256.
If set to `unordered_rowid`, the `SERIAL` type generates a globally unique 64-bit integer (a combination of the insert timestamp and the ID of the node executing the statement) that does not have unique ordering. | `'rowid'` | Yes | Yes | -| `server_version` | The version of PostgreSQL that CockroachDB emulates. | Version-dependent | No | Yes | -| `server_version_num` | The version of PostgreSQL that CockroachDB emulates. | Version-dependent | Yes | Yes | -| `session_id` | The ID of the current session. | Session-dependent | No | Yes | -| `session_user` | The user connected for the current session. | User in connection string | No | Yes | -| `sql_safe_updates` | If `false`, potentially unsafe SQL statements are allowed, including `DROP` of a non-empty database and all dependent objects, [`DELETE`](delete.html) without a `WHERE` clause, [`UPDATE`](update.html) without a `WHERE` clause, and [`ALTER TABLE .. DROP COLUMN`](drop-column.html).
See [Allow Potentially Unsafe SQL Statements](cockroach-sql.html#allow-potentially-unsafe-sql-statements) for more details. | `true` for interactive sessions from the [built-in SQL client](cockroach-sql.html),<br>
`false` for sessions from other clients | Yes | Yes | -| `statement_timeout` | The amount of time a statement can run before being stopped.
This value can be an `int` (e.g., `10`) and will be interpreted as milliseconds. It can also be an interval or string argument, where the string can be parsed as a valid interval (e.g., `'4s'`).
A value of `0` turns it off. | The value set by the `sql.defaults.statement_timeout` [cluster setting](cluster-settings.html) (`0s`, by default). | Yes | Yes | -| `stub_catalog_tables` | If `off`, querying an unimplemented, empty [`pg_catalog`](pg-catalog.html) table will result in an error, as is the case in v20.2 and earlier. If `on`, querying an unimplemented, empty `pg_catalog` table simply returns no rows. | `on` | Yes | Yes | -| `timezone` | The default time zone for the current session. This session variable was named `"time zone"` (with a space) in CockroachDB 1.x. It has been renamed for compatibility with PostgreSQL. | `UTC` | Yes | Yes | -| `tracing` | The trace recording state. | `off` | | Yes | -| `transaction_isolation` | All transactions execute with `SERIALIZABLE` isolation. See [Transactions: Isolation levels](transactions.html#isolation-levels). This session variable was called `transaction isolation level` (with spaces) in CockroachDB 1.x. It has been renamed for compatibility with PostgreSQL. | `SERIALIZABLE` | No | Yes | -| `transaction_priority` | The priority of the current transaction. See Transactions: Transaction priorities for more details. This session variable was called transaction priority (with a space) in CockroachDB 1.x. It has been renamed for compatibility with PostgreSQL. | `NORMAL` | Yes | Yes | -| `transaction_read_only` | The access mode of the current transaction. See [`SET TRANSACTION`](set-transaction.html) for more details. | `off` | Yes | Yes | -| `transaction_rows_read_err` | The limit for the number of rows read by a SQL transaction. If this value is exceeded the transaction will fail (or the event will be logged to `SQL_INTERNAL_PERF` for internal transactions). | `0` | Yes | Yes | -| `transaction_rows_read_log` | The threshold for the number of rows read by a SQL transaction. If this value is exceeded, the event will be logged to `SQL_PERF` (or `SQL_INTERNAL_PERF` for internal transactions). | `0` | Yes | Yes | -| `transaction_rows_written_err` | The limit for the number of rows written by a SQL transaction. If this value is exceeded the transaction will fail (or the event will be logged to `SQL_INTERNAL_PERF` for internal transactions). | `0` | Yes | Yes | -| `transaction_rows_written_log` | The threshold for the number of rows written by a SQL transaction. If this value is exceeded, the event will be logged to `SQL_PERF` (or `SQL_INTERNAL_PERF` for internal transactions). | `0` | Yes | Yes | -| `transaction_status` | The state of the current transaction. See [Transactions](transactions.html) for more details. This session variable was called `transaction status` (with a space) in CockroachDB 1.x. It has been renamed for compatibility with PostgreSQL. | `NoTxn` | No | Yes | -| `troubleshooting_mode_enabled` | When enabled, avoid performing additional work on queries, such as collecting and emitting telemetry data. This session variable is particularly useful when the cluster is experiencing issues, unavailability, or failure. | `off` | Yes | Yes | -| `use_declarative_schema_changer` | Whether to use the declarative schema changer for supported statements. See [Declarative schema changer](online-schema-changes.html#declarative-schema-changer) for more details. | `on` | Yes | Yes | -| `vectorize` | The vectorized execution engine mode. Options include `on` and `off`. For more details, see [Configure vectorized execution for CockroachDB](vectorized-execution.html#configure-vectorized-execution). 
| `on` | Yes | Yes | - -The following session variables are exposed only for backwards compatibility with earlier CockroachDB releases and have no impact on how CockroachDB runs: - -| Variable name | Initial value | Modify with [`SET`](set-vars.html)? | View with [`SHOW`](show-vars.html)? | -|---|---|---|---| -| `backslash_quote` | `safe_encoding` | No | Yes | -| `client_encoding` | `UTF8` | No | Yes | -| `default_tablespace` | | No | Yes | -| `enable_drop_enum_value` | `off` | Yes | Yes | -| `enable_seqscan` | `on` | Yes | Yes | -| `escape_string_warning` | `on` | No | Yes | -| `experimental_enable_hash_sharded_indexes` | `off` | Yes | Yes | -| `integer_datetimes` | `on` | No | Yes | -| `max_identifier_length` | `128` | No | Yes | -| `max_index_keys` | `32` | No | Yes | -| `row_security` | `off` | No | Yes | -| `standard_conforming_strings` | `on` | No | Yes | -| `server_encoding` | `UTF8` | Yes | Yes | -| `synchronize_seqscans` | `on` | No | Yes | -| `synchronous_commit` | `on` | Yes | Yes | diff --git a/src/current/_includes/v22.1/misc/set-enterprise-license.md b/src/current/_includes/v22.1/misc/set-enterprise-license.md deleted file mode 100644 index 55d71273c32..00000000000 --- a/src/current/_includes/v22.1/misc/set-enterprise-license.md +++ /dev/null @@ -1,16 +0,0 @@ -As the CockroachDB `root` user, open the [built-in SQL shell](cockroach-sql.html) in insecure or secure mode, as per your CockroachDB setup. In the following example, we assume that CockroachDB is running in insecure mode. Then use the [`SET CLUSTER SETTING`](set-cluster-setting.html) command to set the name of your organization and the license key: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SET CLUSTER SETTING cluster.organization = 'Acme Company'; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SET CLUSTER SETTING enterprise.license = 'xxxxxxxxxxxx'; -~~~ diff --git a/src/current/_includes/v22.1/misc/sorting-delete-output.md b/src/current/_includes/v22.1/misc/sorting-delete-output.md deleted file mode 100644 index a67c7cb3229..00000000000 --- a/src/current/_includes/v22.1/misc/sorting-delete-output.md +++ /dev/null @@ -1,9 +0,0 @@ -To sort the output of a `DELETE` statement, use: - -{% include_cached copy-clipboard.html %} -~~~ sql -> WITH a AS (DELETE ... RETURNING ...) - SELECT ... FROM a ORDER BY ... -~~~ - -For an example, see [Sort and return deleted rows](delete.html#sort-and-return-deleted-rows). diff --git a/src/current/_includes/v22.1/misc/source-privileges.md b/src/current/_includes/v22.1/misc/source-privileges.md deleted file mode 100644 index 135a153b83f..00000000000 --- a/src/current/_includes/v22.1/misc/source-privileges.md +++ /dev/null @@ -1,12 +0,0 @@ -The source file URL does _not_ require the [`admin` role](security-reference/authorization.html#admin-role) in the following scenarios: - -- S3 and GS using `SPECIFIED` (and not `IMPLICIT`) credentials. Azure is always `SPECIFIED` by default. 
-- [Userfile](use-userfile-for-bulk-operations.html) - -The source file URL _does_ require the [`admin` role](security-reference/authorization.html#admin-role) in the following scenarios: - -- S3 or GS using `IMPLICIT` credentials -- Use of a [custom endpoint](https://docs.aws.amazon.com/sdk-for-go/api/aws/endpoints/) on S3 -- [Nodelocal](cockroach-nodelocal-upload.html), [HTTP](use-a-local-file-server-for-bulk-operations.html) or [HTTPS] (use-a-local-file-server-for-bulk-operations.html) - -We recommend using [cloud storage for bulk operations](use-cloud-storage-for-bulk-operations.html). diff --git a/src/current/_includes/v22.1/misc/storage-class-glacier-incremental.md b/src/current/_includes/v22.1/misc/storage-class-glacier-incremental.md deleted file mode 100644 index 92d1f6cf90d..00000000000 --- a/src/current/_includes/v22.1/misc/storage-class-glacier-incremental.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_danger}} -[Incremental backups](take-full-and-incremental-backups.html#incremental-backups) are **not** compatible with the S3 Glacier Flexible Retrieval or Glacier Deep Archive storage classes. Incremental backups require ad-hoc reading of previous backups. The Glacier Flexible Retrieval or Glacier Deep Archive storage classes do not allow immediate access to S3 objects without first restoring the objects. See Amazon's documentation on [Restoring an archived object](https://docs.aws.amazon.com/AmazonS3/latest/userguide/restoring-objects.html) for more detail. -{{site.data.alerts.end}} \ No newline at end of file diff --git a/src/current/_includes/v22.1/misc/storage-classes.md b/src/current/_includes/v22.1/misc/storage-classes.md deleted file mode 100644 index c4dafce941e..00000000000 --- a/src/current/_includes/v22.1/misc/storage-classes.md +++ /dev/null @@ -1 +0,0 @@ -Use the parameter to set one of these [storage classes](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html#AmazonS3-PutObject-request-header-StorageClass) listed in Amazon's documentation. For more general usage information, see Amazon's [Using Amazon S3 storage classes](https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-class-intro.html) documentation. diff --git a/src/current/_includes/v22.1/misc/table-storage-parameters.md b/src/current/_includes/v22.1/misc/table-storage-parameters.md deleted file mode 100644 index f4be17d72ce..00000000000 --- a/src/current/_includes/v22.1/misc/table-storage-parameters.md +++ /dev/null @@ -1,22 +0,0 @@ -| Parameter name | Description | Data type | Default value | -|------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------|---------------| -| `exclude_data_from_backup` | **New in v22.1:** Excludes the data in this table from any future backups. | Boolean | `false` | -| `sql_stats_automatic_collection_enabled` | Enable [automatic statistics collection](cost-based-optimizer.html#enable-and-disable-automatic-statistics-collection-for-tables) for this table. | Boolean | `true` | -| `sql_stats_automatic_collection_min_stale_rows` | Minimum number of stale rows in this table that will trigger a statistics refresh. | Integer | 500 | -| `sql_stats_automatic_collection_fraction_stale_rows` | Fraction of stale rows in this table that will trigger a statistics refresh. | Float | 0.2 | -| `ttl` | Signifies if a TTL is active. 
Automatically set and controls the reset of all TTL-related storage parameters. | N/A | N/A | -| `ttl_automatic_column` | If set, use the value of the `crdb_internal_expiration` hidden column. Always set to `true` and cannot be reset. | Boolean | `true` | -| `ttl_delete_batch_size` | The number of rows to [delete](delete.html) at a time. Minimum: `1`. | Integer | `100` | -| `ttl_delete_rate_limit` | The maximum number of rows to be deleted per second (rate limit). `0` means no limit. | Integer | `0` | -| `ttl_expire_after` | The [interval](interval.html) when a TTL will expire. This parameter is required to enable TTL. Minimum: `'1 microsecond'`.

Use `RESET (ttl)` to remove from the table. | Interval | N/A | -| `ttl_job_cron` | The frequency at which the TTL job runs. | [CRON syntax](https://cron.help) | `'@hourly'` | -| `ttl_label_metrics` | Whether or not [TTL metrics](row-level-ttl.html#ttl-metrics) are labelled by table name (at the risk of added cardinality). | Boolean | `false` | -| `ttl_pause` | If set, stops the TTL job from executing. | Boolean | `false` | -| `ttl_range_concurrency` | The Row-Level TTL queries split up scans by ranges, and this determines how many concurrent ranges are processed at a time. Minimum: `1`. | Integer | `1` | -| `ttl_row_stats_poll_interval` | If set, counts rows and expired rows on the table to report as Prometheus metrics while the TTL job is running. Unset by default, meaning no stats are fetched and reported. | Interval | N/A | -| `ttl_select_batch_size` | The number of rows to [select](select-clause.html) at one time during the row expiration check. Minimum: `1`. | Integer | `500` | - -The following parameters are included for PostgreSQL compatibility and do not affect how CockroachDB runs: - -- `autovacuum_enabled` -- `fillfactor` diff --git a/src/current/_includes/v22.1/misc/tooling.md b/src/current/_includes/v22.1/misc/tooling.md deleted file mode 100644 index 4dcb68f3941..00000000000 --- a/src/current/_includes/v22.1/misc/tooling.md +++ /dev/null @@ -1,90 +0,0 @@ -## Support levels - -Cockroach Labs has partnered with open-source projects, vendors, and individuals to offer the following levels of support with third-party tools: - -- **Full support** indicates that Cockroach Labs is committed to maintaining compatibility with the vast majority of the tool's features. CockroachDB is regularly tested against the latest version documented in the table below. -- **Partial support** indicates that Cockroach Labs is working towards full support for the tool. The primary features of the tool are compatible with CockroachDB (e.g., connecting and basic database operations), but full integration may require additional steps, lack support for all features, or exhibit unexpected behavior. -- **Partner supported** indicates that Cockroach Labs has a partnership with a third-party vendor that provides support for the CockroachDB integration with their tool. - -{{site.data.alerts.callout_info}} -Unless explicitly stated, support for a [driver](#drivers) or [data access framework](#data-access-frameworks-e-g-orms) does not include [automatic, client-side transaction retry handling](transactions.html#client-side-intervention). For client-side transaction retry handling samples, see [Example Apps](example-apps.html). -{{site.data.alerts.end}} - -If you encounter problems using CockroachDB with any of the tools listed on this page, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward better support. - -For a list of tools supported by the CockroachDB community, see [Third-Party Tools Supported by the Community](community-tooling.html). 
- -## Drivers - -| Language | Driver | Latest tested version | Support level | CockroachDB adapter | Tutorial | -|----------+--------+-----------------------+---------------------+---------------------+----------| -| C | [libpq](http://www.postgresql.org/docs/13/static/libpq.html)| PostgreSQL 13 | Beta | N/A | N/A | -| C# (.NET) | [Npgsql](https://www.nuget.org/packages/Npgsql/) | 7.0.2 | Full | N/A | [Build a C# App with CockroachDB (Npgsql)](build-a-csharp-app-with-cockroachdb.html) | -| Go | [pgx](https://github.com/jackc/pgx/releases)


[pq](https://github.com/lib/pq) | {% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/release-22.1/pkg/cmd/roachtest/tests/pgx.go ||var supportedPGXTag = "||"\n\n %}
(use latest version of CockroachDB adapter)
{% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/release-22.1/pkg/cmd/roachtest/tests/libpq.go ||var libPQSupportedTag = "||"\n\n %} | Full


Full | [`crdbpgx`](https://pkg.go.dev/github.com/cockroachdb/cockroach-go/crdb/crdbpgx)
(includes client-side transaction retry handling)
N/A | [Build a Go App with CockroachDB (pgx)](build-a-go-app-with-cockroachdb.html)


[Build a Go App with CockroachDB (pq)](build-a-go-app-with-cockroachdb-pq.html) | -| Java | [JDBC](https://jdbc.postgresql.org/download/) | {% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/release-22.1/pkg/cmd/roachtest/tests/pgjdbc.go ||var supportedPGJDBCTag = "||"\n\n %} | Full | N/A | [Build a Java App with CockroachDB (JDBC)](build-a-java-app-with-cockroachdb.html) | -| JavaScript | [pg](https://www.npmjs.com/package/pg) | 8.2.1 | Full | N/A | [Build a Node.js App with CockroachDB (pg)](build-a-nodejs-app-with-cockroachdb.html) | -| Python | [psycopg3](https://www.psycopg.org/psycopg3/docs/)


[psycopg2](https://www.psycopg.org/docs/install.html) | 3.0.16


2.8.6 | Full


Full | N/A


N/A | [Build a Python App with CockroachDB (psycopg3)](build-a-python-app-with-cockroachdb-psycopg3.html)


[Build a Python App with CockroachDB (psycopg2)](build-a-python-app-with-cockroachdb.html) | -| Ruby | [pg](https://rubygems.org/gems/pg) | {% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/release-22.1/pkg/cmd/roachtest/tests/ruby_pg.go ||var rubyPGVersion = "||"\n\n %} | Full | N/A | [Build a Ruby App with CockroachDB (pg)](build-a-ruby-app-with-cockroachdb.html) | -| Rust | [rust-postgres](https://github.com/sfackler/rust-postgres) | 0.19.2 | Beta | N/A | [Build a Rust App with CockroachDB](build-a-rust-app-with-cockroachdb.html) | - -## Data access frameworks (e.g., ORMs) - -| Language | Framework | Latest tested version | Support level | CockroachDB adapter | Tutorial | -|----------+-----------+-----------------------+---------------+---------------------+----------| -| Go | [GORM](https://github.com/jinzhu/gorm/releases)


[go-pg](https://github.com/go-pg/pg)
[upper/db](https://github.com/upper/db) | {% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/release-22.1/pkg/cmd/roachtest/tests/gorm.go ||var gormSupportedTag = "||"\n\n %}


{% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/release-22.1/pkg/cmd/roachtest/tests/gopg.go ||var gopgSupportedTag = "||"\n\n %}
v4 | Full


Full
Full | [`crdbgorm`](https://pkg.go.dev/github.com/cockroachdb/cockroach-go/crdb/crdbgorm)
(includes client-side transaction retry handling)
N/A
N/A | [Build a Go App with CockroachDB (GORM)](build-a-go-app-with-cockroachdb-gorm.html)


N/A
[Build a Go App with CockroachDB (upper/db)](build-a-go-app-with-cockroachdb-upperdb.html) | -| Java | [Hibernate](https://hibernate.org/orm/)
(including [Hibernate Spatial](https://docs.jboss.org/hibernate/orm/current/userguide/html_single/Hibernate_User_Guide.html#spatial))
[jOOQ](https://www.jooq.org/)
[MyBatis](https://mybatis.org/mybatis-3/) | {% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/release-22.1/pkg/cmd/roachtest/tests/hibernate.go ||var supportedHibernateTag = "||"\n\n %} (must be at least 5.4.19)


3.13.2 (must be at least 3.13.0)
3.5.5| Full


Full
Full | N/A


N/A
N/A | [Build a Java App with CockroachDB (Hibernate)](build-a-java-app-with-cockroachdb-hibernate.html)


[Build a Java App with CockroachDB (jOOQ)](build-a-java-app-with-cockroachdb-jooq.html)
[Build a Spring App with CockroachDB (MyBatis)](build-a-spring-app-with-cockroachdb-mybatis.html) | -| JavaScript/TypeScript | [Sequelize](https://www.npmjs.com/package/sequelize)


[Knex.js](https://knexjs.org/)
[Prisma](https://prisma.io)
[TypeORM](https://www.npmjs.com/package/typeorm) | {% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/release-22.1/pkg/cmd/roachtest/tests/sequelize.go ||var supportedSequelizeCockroachDBRelease = "||"\n\n %}
(use latest version of CockroachDB adapter)
{% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/release-22.1/pkg/cmd/roachtest/tests/knex.go ||const supportedKnexTag = "||"\n\n %}
3.14.0
0.3.17 {% comment %}{% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/master/pkg/cmd/roachtest/tests/typeorm.go ||const supportedTypeORMRelease = "||"\n %}{% endcomment %} | Full


Full
Full
Full | [`sequelize-cockroachdb`](https://www.npmjs.com/package/sequelize-cockroachdb)


N/A
N/A
N/A | [Build a Node.js App with CockroachDB (Sequelize)](build-a-nodejs-app-with-cockroachdb-sequelize.html)


[Build a Node.js App with CockroachDB (Knex.js)](build-a-nodejs-app-with-cockroachdb-knexjs.html)
[Build a Node.js App with CockroachDB (Prisma)](build-a-nodejs-app-with-cockroachdb-prisma.html)
[Build a TypeScript App with CockroachDB (TypeORM)](build-a-typescript-app-with-cockroachdb.html) | -| Ruby | [ActiveRecord](https://rubygems.org/gems/activerecord)
[RGeo/RGeo-ActiveRecord](https://github.com/cockroachdb/activerecord-cockroachdb-adapter#working-with-spatial-data) | {% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/release-22.1/pkg/cmd/roachtest/tests/activerecord.go ||var supportedRailsVersion = "||"\nvar %}
(use latest version of CockroachDB adapter) | Full | [`activerecord-cockroachdb-adapter`](https://rubygems.org/gems/activerecord-cockroachdb-adapter)
(includes client-side transaction retry handling) | [Build a Ruby App with CockroachDB (ActiveRecord)](build-a-ruby-app-with-cockroachdb-activerecord.html) | -| Python | [Django](https://pypi.org/project/Django/)
(including [GeoDjango](https://docs.djangoproject.com/en/3.1/ref/contrib/gis/))
[peewee](https://github.com/coleifer/peewee/)
[SQLAlchemy](https://www.sqlalchemy.org/) | {% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/release-22.1/pkg/cmd/roachtest/tests/django.go ||var djangoSupportedTag = "cockroach-||"\nvar %}
(use latest version of CockroachDB adapter)

3.13.3
0.7.13
1.4.17
(use latest version of CockroachDB adapter) | Full


Full
Full
Full | [`django-cockroachdb`](https://pypi.org/project/django-cockroachdb/)


N/A
N/A
[`sqlalchemy-cockroachdb`](https://pypi.org/project/sqlalchemy-cockroachdb)
(includes client-side transaction retry handling) | [Build a Python App with CockroachDB (Django)](build-a-python-app-with-cockroachdb-django.html)


N/A (See [peewee docs](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#cockroach-database).)
[Build a Python App with CockroachDB (SQLAlchemy)](build-a-python-app-with-cockroachdb-sqlalchemy.html) | - -## Application frameworks - -| Framework | Data access | Latest tested version | Support level | Tutorial | -|-----------+-------------+-----------------------+---------------+----------| -| Spring | [JDBC](build-a-spring-app-with-cockroachdb-jdbc.html)
[JPA (Hibernate)](build-a-spring-app-with-cockroachdb-jpa.html)
[MyBatis](build-a-spring-app-with-cockroachdb-mybatis.html) | See individual Java ORM or [driver](#drivers) for data access version support. | See individual Java ORM or [driver](#drivers) for data access support level. | [Build a Spring App with CockroachDB (JDBC)](build-a-spring-app-with-cockroachdb-jdbc.html)
[Build a Spring App with CockroachDB (JPA)](build-a-spring-app-with-cockroachdb-jpa.html)
[Build a Spring App with CockroachDB (MyBatis)](build-a-spring-app-with-cockroachdb-mybatis.html) - -## Graphical user interfaces (GUIs) - -| GUI | Latest tested version | Support level | Tutorial | -|-----+-----------------------+---------------+----------| -| [DBeaver](https://dbeaver.com/) | 5.2.3 | Full | [Visualize CockroachDB Schemas with DBeaver](dbeaver.html) - -## Integrated development environments (IDEs) - -| IDE | Latest tested version | Support level | Tutorial | -|-----+-----------------------+---------------+----------| -| [DataGrip](https://www.jetbrains.com/datagrip/) | 2021.1 | Full | N/A -| [IntelliJ IDEA](https://www.jetbrains.com/idea/) | 2021.1 | Full | [Use IntelliJ IDEA with CockroachDB](intellij-idea.html) - -## Enhanced data security tools - -| Tool | Support level | Integration | -|-----+---------------+----------| -| [Satori](https://satoricyber.com/) | Partner supported | [Satori Integration](satori-integration.html) | -| [HashiCorp Vault](https://www.vaultproject.io/) | Partner supported | [HashiCorp Vault Integration](hashicorp-integration.html) | - -## Schema migration tools - -| Tool | Latest tested version | Support level | Tutorial | -|-----+------------------------+----------------+----------| -| [Alembic](https://alembic.sqlalchemy.org/en/latest/) | 1.7 | Full | [Migrate CockroachDB Schemas with Alembic](alembic.html) -| [Flyway](https://flywaydb.org/documentation/commandline/#download-and-installation) | 7.1.0 | Full | [Migrate CockroachDB Schemas with Flyway](flyway.html) -| [Liquibase](https://www.liquibase.org/download) | 4.2.0 | Full | [Migrate CockroachDB Schemas with Liquibase](liquibase.html) -| [Prisma](https://prisma.io) | 3.14.0 | Full | [Build a Node.js App with CockroachDB (Prisma)](build-a-nodejs-app-with-cockroachdb-prisma.html) - -## Data migration tools - -| Tool | Latest tested version | Support level | Tutorial | -|-----+------------------------+----------------+----------| -| [AWS DMS](https://aws.amazon.com/dms/) | 3.4.6 | Beta | [Migrate your database to CockroachDB with AWS DMS](aws-dms.html) - -## Provisioning tools -| Tool | Latest tested version | Support level | Documentation | -|------+-----------------------+---------------+---------------| -| [Terraform](https://terraform.io/) | 1.3.2 | Beta | [Terraform provider for CockroachDB Cloud](https://github.com/cockroachdb/terraform-provider-cockroach#get-started) | - -## Other tools - -| Tool | Latest tested version | Support level | Tutorial | -|-----+------------------------+---------------+----------| -| [Flowable](https://github.com/flowable/flowable-engine) | 6.4.2 | Full | [Getting Started with Flowable and CockroachDB (external)](https://blog.flowable.org/2019/07/11/getting-started-with-flowable-and-cockroachdb/) diff --git a/src/current/_includes/v22.1/misc/userfile.md b/src/current/_includes/v22.1/misc/userfile.md deleted file mode 100644 index 1a23d5d2c39..00000000000 --- a/src/current/_includes/v22.1/misc/userfile.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} - CockroachDB now supports uploading files to a [user-scoped file storage](use-userfile-for-bulk-operations.html) using a SQL connection. We recommend using `userfile` instead of `nodelocal`, as it is user-scoped and more secure. 
-{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/orchestration/apply-custom-resource.md b/src/current/_includes/v22.1/orchestration/apply-custom-resource.md deleted file mode 100644 index e7aacf41a1e..00000000000 --- a/src/current/_includes/v22.1/orchestration/apply-custom-resource.md +++ /dev/null @@ -1,6 +0,0 @@ -Apply the new settings to the cluster: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ kubectl apply -f example.yaml -~~~ \ No newline at end of file diff --git a/src/current/_includes/v22.1/orchestration/apply-helm-values.md b/src/current/_includes/v22.1/orchestration/apply-helm-values.md deleted file mode 100644 index 90f9c8783f8..00000000000 --- a/src/current/_includes/v22.1/orchestration/apply-helm-values.md +++ /dev/null @@ -1,6 +0,0 @@ -Apply the custom values to override the default Helm chart [values](https://github.com/cockroachdb/helm-charts/blob/master/cockroachdb/values.yaml): - -{% include_cached copy-clipboard.html %} -~~~ shell -$ helm upgrade {release-name} --values {custom-values}.yaml cockroachdb/cockroachdb -~~~ \ No newline at end of file diff --git a/src/current/_includes/v22.1/orchestration/apply-statefulset-manifest.md b/src/current/_includes/v22.1/orchestration/apply-statefulset-manifest.md deleted file mode 100644 index 0236903c497..00000000000 --- a/src/current/_includes/v22.1/orchestration/apply-statefulset-manifest.md +++ /dev/null @@ -1,6 +0,0 @@ -Apply the new settings to the cluster: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ kubectl apply -f {statefulset-manifest}.yaml -~~~ \ No newline at end of file diff --git a/src/current/_includes/v22.1/orchestration/kubernetes-basic-sql.md b/src/current/_includes/v22.1/orchestration/kubernetes-basic-sql.md deleted file mode 100644 index f7cfbd76641..00000000000 --- a/src/current/_includes/v22.1/orchestration/kubernetes-basic-sql.md +++ /dev/null @@ -1,44 +0,0 @@ -1. Run some basic [CockroachDB SQL statements](learn-cockroachdb-sql.html): - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE DATABASE bank; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE TABLE bank.accounts (id INT PRIMARY KEY, balance DECIMAL); - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > INSERT INTO bank.accounts VALUES (1, 1000.50); - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SELECT * FROM bank.accounts; - ~~~ - - ~~~ - id | balance - +----+---------+ - 1 | 1000.50 - (1 row) - ~~~ - -1. [Create a user with a password](create-user.html#create-a-user-with-a-password): - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE USER roach WITH PASSWORD 'Q7gc8rEdS'; - ~~~ - - You will need this username and password to access the DB Console later. - -1. Exit the SQL shell and pod: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > \q - ~~~ \ No newline at end of file diff --git a/src/current/_includes/v22.1/orchestration/kubernetes-cockroach-cert.md b/src/current/_includes/v22.1/orchestration/kubernetes-cockroach-cert.md deleted file mode 100644 index ff44cf183a4..00000000000 --- a/src/current/_includes/v22.1/orchestration/kubernetes-cockroach-cert.md +++ /dev/null @@ -1,90 +0,0 @@ -{{site.data.alerts.callout_info}} -The below steps use [`cockroach cert` commands](cockroach-cert.html) to quickly generate and sign the CockroachDB node and client certificates. Read our [Authentication](authentication.html#using-digital-certificates-with-cockroachdb) docs to learn about other methods of signing certificates. 
-{{site.data.alerts.end}} - -1. Create two directories: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ mkdir certs my-safe-directory - ~~~ - - Directory | Description - ----------|------------ - `certs` | You'll generate your CA certificate and all node and client certificates and keys in this directory. - `my-safe-directory` | You'll generate your CA key in this directory and then reference the key when generating node and client certificates. - -1. Create the CA certificate and key pair: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-ca \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - -1. Create a client certificate and key pair for the root user: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-client \ - root \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - -1. Upload the client certificate and key to the Kubernetes cluster as a secret: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl create secret \ - generic cockroachdb.client.root \ - --from-file=certs - ~~~ - - ~~~ - secret/cockroachdb.client.root created - ~~~ - -1. Create the certificate and key pair for your CockroachDB nodes: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-node \ - localhost 127.0.0.1 \ - cockroachdb-public \ - cockroachdb-public.default \ - cockroachdb-public.default.svc.cluster.local \ - *.cockroachdb \ - *.cockroachdb.default \ - *.cockroachdb.default.svc.cluster.local \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - -1. Upload the node certificate and key to the Kubernetes cluster as a secret: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl create secret \ - generic cockroachdb.node \ - --from-file=certs - ~~~ - - ~~~ - secret/cockroachdb.node created - ~~~ - -1. Check that the secrets were created on the cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get secrets - ~~~ - - ~~~ - NAME TYPE DATA AGE - cockroachdb.client.root Opaque 3 41m - cockroachdb.node Opaque 5 14s - default-token-6qjdb kubernetes.io/service-account-token 3 4m - ~~~ \ No newline at end of file diff --git a/src/current/_includes/v22.1/orchestration/kubernetes-expand-disk-helm.md b/src/current/_includes/v22.1/orchestration/kubernetes-expand-disk-helm.md deleted file mode 100644 index 4ec3d2f171f..00000000000 --- a/src/current/_includes/v22.1/orchestration/kubernetes-expand-disk-helm.md +++ /dev/null @@ -1,118 +0,0 @@ -You can expand certain [types of persistent volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes -) (including GCE Persistent Disk and Amazon Elastic Block Store) by editing their persistent volume claims. - -{{site.data.alerts.callout_info}} -These steps assume you followed the tutorial [Deploy CockroachDB on Kubernetes](deploy-cockroachdb-with-kubernetes.html?filters=helm). -{{site.data.alerts.end}} - -1. Get the persistent volume claims for the volumes: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pvc - ~~~ - - ~~~ - NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE - datadir-my-release-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - datadir-my-release-cockroachdb-1 Bound pvc-75e143ca-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - datadir-my-release-cockroachdb-2 Bound pvc-75ef409a-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - ~~~ - -1. 
In order to expand a persistent volume claim, `AllowVolumeExpansion` in its storage class must be `true`. Examine the storage class: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl describe storageclass standard - ~~~ - - ~~~ - Name: standard - IsDefaultClass: Yes - Annotations: storageclass.kubernetes.io/is-default-class=true - Provisioner: kubernetes.io/gce-pd - Parameters: type=pd-standard - AllowVolumeExpansion: False - MountOptions: - ReclaimPolicy: Delete - VolumeBindingMode: Immediate - Events: - ~~~ - - If necessary, edit the storage class: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl patch storageclass standard -p '{"allowVolumeExpansion": true}' - ~~~ - - ~~~ - storageclass.storage.k8s.io/standard patched - ~~~ - -1. Edit one of the persistent volume claims to request more space: - - {{site.data.alerts.callout_info}} - The requested `storage` value must be larger than the previous value. You cannot use this method to decrease the disk size. - {{site.data.alerts.end}} - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl patch pvc datadir-my-release-cockroachdb-0 -p '{"spec": {"resources": {"requests": {"storage": "200Gi"}}}}' - ~~~ - - ~~~ - persistentvolumeclaim/datadir-my-release-cockroachdb-0 patched - ~~~ - -1. Check the capacity of the persistent volume claim: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pvc datadir-my-release-cockroachdb-0 - ~~~ - - ~~~ - NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE - datadir-my-release-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 18m - ~~~ - - If the PVC capacity has not changed, this may be because `AllowVolumeExpansion` was initially set to `false` or because the [volume has a file system](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#resizing-an-in-use-persistentvolumeclaim) that has to be expanded. You will need to start or restart a pod in order to have it reflect the new capacity. - - {{site.data.alerts.callout_success}} - Running `kubectl get pv` will display the persistent volumes with their *requested* capacity and not their actual capacity. This can be misleading, so it's best to use `kubectl get pvc`. - {{site.data.alerts.end}} - -1. Examine the persistent volume claim. If the volume has a file system, you will see a `FileSystemResizePending` condition with an accompanying message: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl describe pvc datadir-my-release-cockroachdb-0 - ~~~ - - ~~~ - Waiting for user to (re-)start a pod to finish file system resize of volume on node. - ~~~ - -1. Delete the corresponding pod to restart it: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl delete pod my-release-cockroachdb-0 - ~~~ - - The `FileSystemResizePending` condition and message will be removed. - -1. View the updated persistent volume claim: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pvc datadir-my-release-cockroachdb-0 - ~~~ - - ~~~ - NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE - datadir-my-release-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 200Gi RWO standard 20m - ~~~ - -1. The CockroachDB cluster needs to be expanded one node at a time. Repeat steps 3 - 6 to increase the capacities of the remaining volumes by the same amount. 
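For example, the second volume in this Helm deployment would be patched with the same command and size used above; only the claim name changes:

{% include_cached copy-clipboard.html %}
~~~ shell
$ kubectl patch pvc datadir-my-release-cockroachdb-1 -p '{"spec": {"resources": {"requests": {"storage": "200Gi"}}}}'
~~~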
\ No newline at end of file diff --git a/src/current/_includes/v22.1/orchestration/kubernetes-expand-disk-manual.md b/src/current/_includes/v22.1/orchestration/kubernetes-expand-disk-manual.md deleted file mode 100644 index e6cf4bbbddb..00000000000 --- a/src/current/_includes/v22.1/orchestration/kubernetes-expand-disk-manual.md +++ /dev/null @@ -1,118 +0,0 @@ -You can expand certain [types of persistent volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes -) (including GCE Persistent Disk and Amazon Elastic Block Store) by editing their persistent volume claims. - -{{site.data.alerts.callout_info}} -These steps assume you followed the tutorial [Deploy CockroachDB on Kubernetes](deploy-cockroachdb-with-kubernetes.html?filters=manual). -{{site.data.alerts.end}} - -1. Get the persistent volume claims for the volumes: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pvc - ~~~ - - ~~~ - NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE - datadir-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - datadir-cockroachdb-1 Bound pvc-75e143ca-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - datadir-cockroachdb-2 Bound pvc-75ef409a-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - ~~~ - -1. In order to expand a persistent volume claim, `AllowVolumeExpansion` in its storage class must be `true`. Examine the storage class: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl describe storageclass standard - ~~~ - - ~~~ - Name: standard - IsDefaultClass: Yes - Annotations: storageclass.kubernetes.io/is-default-class=true - Provisioner: kubernetes.io/gce-pd - Parameters: type=pd-standard - AllowVolumeExpansion: False - MountOptions: - ReclaimPolicy: Delete - VolumeBindingMode: Immediate - Events: - ~~~ - - If necessary, edit the storage class: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl patch storageclass standard -p '{"allowVolumeExpansion": true}' - ~~~ - - ~~~ - storageclass.storage.k8s.io/standard patched - ~~~ - -1. Edit one of the persistent volume claims to request more space: - - {{site.data.alerts.callout_info}} - The requested `storage` value must be larger than the previous value. You cannot use this method to decrease the disk size. - {{site.data.alerts.end}} - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl patch pvc datadir-cockroachdb-0 -p '{"spec": {"resources": {"requests": {"storage": "200Gi"}}}}' - ~~~ - - ~~~ - persistentvolumeclaim/datadir-cockroachdb-0 patched - ~~~ - -1. Check the capacity of the persistent volume claim: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pvc datadir-cockroachdb-0 - ~~~ - - ~~~ - NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE - datadir-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 18m - ~~~ - - If the PVC capacity has not changed, this may be because `AllowVolumeExpansion` was initially set to `false` or because the [volume has a file system](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#resizing-an-in-use-persistentvolumeclaim) that has to be expanded. You will need to start or restart a pod in order to have it reflect the new capacity. - - {{site.data.alerts.callout_success}} - Running `kubectl get pv` will display the persistent volumes with their *requested* capacity and not their actual capacity. This can be misleading, so it's best to use `kubectl get pvc`. 
- {{site.data.alerts.end}} - -1. Examine the persistent volume claim. If the volume has a file system, you will see a `FileSystemResizePending` condition with an accompanying message: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl describe pvc datadir-cockroachdb-0 - ~~~ - - ~~~ - Waiting for user to (re-)start a pod to finish file system resize of volume on node. - ~~~ - -1. Delete the corresponding pod to restart it: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl delete pod cockroachdb-0 - ~~~ - - The `FileSystemResizePending` condition and message will be removed. - -1. View the updated persistent volume claim: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pvc datadir-cockroachdb-0 - ~~~ - - ~~~ - NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE - datadir-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 200Gi RWO standard 20m - ~~~ - -1. The CockroachDB cluster needs to be expanded one node at a time. Repeat steps 3 - 6 to increase the capacities of the remaining volumes by the same amount. \ No newline at end of file diff --git a/src/current/_includes/v22.1/orchestration/kubernetes-limitations.md b/src/current/_includes/v22.1/orchestration/kubernetes-limitations.md deleted file mode 100644 index b2a3db884c9..00000000000 --- a/src/current/_includes/v22.1/orchestration/kubernetes-limitations.md +++ /dev/null @@ -1,37 +0,0 @@ -#### Kubernetes version - -To deploy CockroachDB {{page.version.version}}, Kubernetes 1.18 or higher is required. Cockroach Labs strongly recommends that you use a Kubernetes version that is [eligible for patch support by the Kubernetes project](https://kubernetes.io/releases/). - -#### Kubernetes Operator - -- The CockroachDB Kubernetes Operator currently deploys clusters in a single region. For multi-region deployments using manual configs, see [Orchestrate CockroachDB Across Multiple Kubernetes Clusters]({% link {{ page.version.version }}/orchestrate-cockroachdb-with-kubernetes-multi-cluster.md %}). - -- Using the Operator, you can give a new cluster an arbitrary number of [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/). However, a cluster's labels cannot be modified after it is deployed. To track the status of this limitation, refer to [#993](https://github.com/cockroachdb/cockroach-operator/issues/993) in the Operator project's issue tracker. - -#### Helm version - -The CockroachDB Helm chart requires Helm 3.0 or higher. If you attempt to use an incompatible Helm version, an error like the following occurs: - -~~~ shell -Error: UPGRADE FAILED: template: cockroachdb/templates/tests/client.yaml:6:14: executing "cockroachdb/templates/tests/client.yaml" at <.Values.networkPolicy.enabled>: nil pointer evaluating interface {}.enabled -~~~ - -The CockroachDB Helm chart is compatible with Kubernetes versions 1.22 and earlier. - -The CockroachDB Helm chart is currently not under active development, and no new features are planned. However, Cockroach Labs remains committed to fully supporting the Helm chart by addressing defects, providing security patches, and addressing breaking changes due to deprecations in Kubernetes APIs. - -A deprecation notice for the Helm chart will be provided to customers a minimum of 6 months in advance of actual deprecation. 
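Before installing the chart, you can verify that your local Helm client meets the version requirement described above; this is a generic Helm command rather than anything specific to CockroachDB:

{% include_cached copy-clipboard.html %}
~~~ shell
$ helm version --short
~~~

The reported version should be `v3.0.0` or higher.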
- 

-#### Network - -Server Name Indication (SNI) is an extension to the TLS protocol that allows a client to indicate which hostname it is attempting to connect to at the start of the TLS handshake. The server can present multiple certificates on the same IP address and TCP port number, and one server can serve multiple secure websites or API services even if they use different certificates. - -Due to its order of operations, the PostgreSQL wire protocol's implementation of TLS is not compatible with SNI-based routing in the Kubernetes ingress controller. Instead, use a TCP load balancer for CockroachDB that is not shared with other services. - -#### Resources - -When starting Kubernetes, select machines with at least **4 vCPUs** and **16 GiB** of memory, and provision at least **2 vCPUs** and **8 GiB** of memory to CockroachDB per pod. These minimum settings are used by default in this deployment guide, and are appropriate for testing purposes only. On a production deployment, you should adjust the resource settings for your workload. For details, see [Resource management](configure-cockroachdb-kubernetes.html#memory-and-cpu). - -#### Storage - -At this time, orchestrations of CockroachDB with Kubernetes use external persistent volumes that are often replicated by the provider. Because CockroachDB already replicates data automatically, this additional layer of replication is unnecessary and can negatively impact performance. High-performance use cases on a private Kubernetes cluster may want to consider using [local volumes](https://kubernetes.io/docs/concepts/storage/volumes/#local). diff --git a/src/current/_includes/v22.1/orchestration/kubernetes-remove-nodes-helm.md b/src/current/_includes/v22.1/orchestration/kubernetes-remove-nodes-helm.md deleted file mode 100644 index cbb34893aad..00000000000 --- a/src/current/_includes/v22.1/orchestration/kubernetes-remove-nodes-helm.md +++ /dev/null @@ -1,126 +0,0 @@ -Before removing a node from your cluster, you must first decommission the node. This lets the node finish in-flight requests, reject any new requests, and transfer all range replicas and range leases off the node. - -{{site.data.alerts.callout_danger}} -If you remove nodes without first telling CockroachDB to decommission them, you may cause data or even cluster unavailability. For more details about how this works and what to consider before removing nodes, see [Prepare for graceful shutdown](node-shutdown.html?filters=decommission#prepare-for-graceful-shutdown). -{{site.data.alerts.end}} - -1. Use the [`cockroach node status`](cockroach-node.html) command to get the internal IDs of nodes. 
For example, if you followed the steps in [Deploy CockroachDB with Kubernetes](deploy-cockroachdb-with-kubernetes.html#step-3-use-the-built-in-sql-client) to launch a secure client pod, get a shell into the `cockroachdb-client-secure` pod: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach node status \ - --certs-dir=/cockroach-certs \ - --host=my-release-cockroachdb-public - ~~~ - - ~~~ - id | address | build | started_at | updated_at | is_available | is_live - +----+---------------------------------------------------------------------------------+--------+----------------------------------+----------------------------------+--------------+---------+ - 1 | my-release-cockroachdb-0.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:36.486082+00:00 | 2018-11-29 18:24:24.587454+00:00 | true | true - 2 | my-release-cockroachdb-2.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:55:03.880406+00:00 | 2018-11-29 18:24:23.469302+00:00 | true | true - 3 | my-release-cockroachdb-1.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:41.383588+00:00 | 2018-11-29 18:24:25.030175+00:00 | true | true - 4 | my-release-cockroachdb-3.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 17:31:19.990784+00:00 | 2018-11-29 18:24:26.041686+00:00 | true | true - (4 rows) - ~~~ - - The pod uses the `root` client certificate created earlier to initialize the cluster, so there's no CSR approval required. - -1. Use the [`cockroach node decommission`](cockroach-node.html) command to decommission the node with the highest number in its address, specifying its ID (in this example, node ID `4` because its address is `my-release-cockroachdb-3`): - - {{site.data.alerts.callout_info}} - You must decommission the node with the highest number in its address. Kubernetes will remove the pod for the node with the highest number in its address when you reduce the replica count. - {{site.data.alerts.end}} - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach node decommission 4 \ - --certs-dir=/cockroach-certs \ - --host=my-release-cockroachdb-public - ~~~ - - You'll then see the decommissioning status print to `stderr` as it changes: - - ~~~ - id | is_live | replicas | is_decommissioning | membership | is_draining - -----+---------+----------+--------------------+-----------------+-------------- - 4 | true | 73 | true | decommissioning | false - ~~~ - - Once the node has been fully decommissioned, you'll see a confirmation: - - ~~~ - id | is_live | replicas | is_decommissioning | membership | is_draining - -----+---------+----------+--------------------+-----------------+-------------- - 4 | true | 0 | true | decommissioning | false - (1 row) - - No more data reported on target nodes. Please verify cluster health before removing the nodes. - ~~~ - -1. Once the node has been decommissioned, scale down your StatefulSet: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm upgrade \ - my-release \ - cockroachdb/cockroachdb \ - --set statefulset.replicas=3 \ - --reuse-values - ~~~ - -1. 
Verify that the pod was successfully removed: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - my-release-cockroachdb-0 1/1 Running 0 51m - my-release-cockroachdb-1 1/1 Running 0 47m - my-release-cockroachdb-2 1/1 Running 0 3m - cockroachdb-client-secure 1/1 Running 0 15m - ... - ~~~ - -1. You should also remove the persistent volume that was mounted to the pod. Get the persistent volume claims for the volumes: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pvc - ~~~ - - ~~~ - NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE - datadir-my-release-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - datadir-my-release-cockroachdb-1 Bound pvc-75e143ca-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - datadir-my-release-cockroachdb-2 Bound pvc-75ef409a-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - datadir-my-release-cockroachdb-3 Bound pvc-75e561ba-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - ~~~ - -1. Verify that the PVC with the highest number in its name is no longer mounted to a pod: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl describe pvc datadir-my-release-cockroachdb-3 - ~~~ - - ~~~ - Name: datadir-my-release-cockroachdb-3 - ... - Mounted By: - ~~~ - -1. Remove the persistent volume by deleting the PVC: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl delete pvc datadir-my-release-cockroachdb-3 - ~~~ - - ~~~ - persistentvolumeclaim "datadir-my-release-cockroachdb-3" deleted - ~~~ \ No newline at end of file diff --git a/src/current/_includes/v22.1/orchestration/kubernetes-remove-nodes-insecure.md b/src/current/_includes/v22.1/orchestration/kubernetes-remove-nodes-insecure.md deleted file mode 100644 index 872aa0859f4..00000000000 --- a/src/current/_includes/v22.1/orchestration/kubernetes-remove-nodes-insecure.md +++ /dev/null @@ -1,140 +0,0 @@ -To safely remove a node from your cluster, you must first decommission the node and only then adjust the `spec.replicas` value of your StatefulSet configuration to permanently remove it. This sequence is important because the decommissioning process lets a node finish in-flight requests, rejects any new requests, and transfers all range replicas and range leases off the node. - -{{site.data.alerts.callout_danger}} -If you remove nodes without first telling CockroachDB to decommission them, you may cause data or even cluster unavailability. For more details about how this works and what to consider before removing nodes, see [Prepare for graceful shutdown](node-shutdown.html?filters=decommission#prepare-for-graceful-shutdown). -{{site.data.alerts.end}} - -1. Launch a temporary interactive pod and use the `cockroach node status` command to get the internal IDs of nodes: - -
- - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl run cockroachdb -it \ - --image=cockroachdb/cockroach:{{page.release_info.version}} \ - --rm \ - --restart=Never \ - -- node status \ - --insecure \ - --host=cockroachdb-public - ~~~ - - ~~~ - id | address | build | started_at | updated_at | is_available | is_live - +----+---------------------------------------------------------------------------------+--------+----------------------------------+----------------------------------+--------------+---------+ - 1 | cockroachdb-0.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:36.486082+00:00 | 2018-11-29 18:24:24.587454+00:00 | true | true - 2 | cockroachdb-2.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:55:03.880406+00:00 | 2018-11-29 18:24:23.469302+00:00 | true | true - 3 | cockroachdb-1.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:41.383588+00:00 | 2018-11-29 18:24:25.030175+00:00 | true | true - 4 | cockroachdb-3.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 17:31:19.990784+00:00 | 2018-11-29 18:24:26.041686+00:00 | true | true - (4 rows) - ~~~ - -
- -
- - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl run cockroachdb -it \ - --image=cockroachdb/cockroach:{{page.release_info.version}} \ - --rm \ - --restart=Never \ - -- node status \ - --insecure \ - --host=my-release-cockroachdb-public - ~~~ - - ~~~ - id | address | build | started_at | updated_at | is_available | is_live - +----+---------------------------------------------------------------------------------+--------+----------------------------------+----------------------------------+--------------+---------+ - 1 | my-release-cockroachdb-0.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:36.486082+00:00 | 2018-11-29 18:24:24.587454+00:00 | true | true - 2 | my-release-cockroachdb-2.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:55:03.880406+00:00 | 2018-11-29 18:24:23.469302+00:00 | true | true - 3 | my-release-cockroachdb-1.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:41.383588+00:00 | 2018-11-29 18:24:25.030175+00:00 | true | true - 4 | my-release-cockroachdb-3.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 17:31:19.990784+00:00 | 2018-11-29 18:24:26.041686+00:00 | true | true - (4 rows) - ~~~ - -
- -2. Note the ID of the node with the highest number in its address (in this case, the address including `cockroachdb-3`) and use the [`cockroach node decommission`](cockroach-node.html) command to decommission it: - - {{site.data.alerts.callout_info}} - It's important to decommission the node with the highest number in its address because, when you reduce the replica count, Kubernetes will remove the pod for that node. - {{site.data.alerts.end}} - -
- - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl run cockroachdb -it \ - --image=cockroachdb/cockroach:{{page.release_info.version}} \ - --rm \ - --restart=Never \ - -- node decommission \ - --insecure \ - --host=cockroachdb-public - ~~~ - -
- -
- - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl run cockroachdb -it \ - --image=cockroachdb/cockroach:{{page.release_info.version}} \ - --rm \ - --restart=Never \ - -- node decommission \ - --insecure \ - --host=my-release-cockroachdb-public - ~~~ - -
- - You'll then see the decommissioning status print to `stderr` as it changes: - - ~~~ - id | is_live | replicas | is_decommissioning | membership | is_draining - -----+---------+----------+--------------------+-----------------+-------------- - 4 | true | 73 | true | decommissioning | false - ~~~ - - Once the node has been fully decommissioned, you'll see a confirmation: - - ~~~ - id | is_live | replicas | is_decommissioning | membership | is_draining - -----+---------+----------+--------------------+-----------------+-------------- - 4 | true | 0 | true | decommissioning | false - (1 row) - - No more data reported on target nodes. Please verify cluster health before removing the nodes. - ~~~ - -3. Once the node has been decommissioned, remove a pod from your StatefulSet: - -
- - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl scale statefulset cockroachdb --replicas=3 - ~~~ - - ~~~ - statefulset "cockroachdb" scaled - ~~~ - -
- -
- - {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm upgrade \ - my-release \ - cockroachdb/cockroachdb \ - --set statefulset.replicas=3 \ - --reuse-values - ~~~ - -
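
    If you want to confirm that the scale-down took effect, a generic `kubectl` check (not specific to the Helm chart) is to list the pods again and verify that only three CockroachDB pods remain:

    {% include_cached copy-clipboard.html %}
    ~~~ shell
    $ kubectl get pods
    ~~~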
diff --git a/src/current/_includes/v22.1/orchestration/kubernetes-remove-nodes-manual.md b/src/current/_includes/v22.1/orchestration/kubernetes-remove-nodes-manual.md deleted file mode 100644 index c8cc789567b..00000000000 --- a/src/current/_includes/v22.1/orchestration/kubernetes-remove-nodes-manual.md +++ /dev/null @@ -1,126 +0,0 @@ -Before removing a node from your cluster, you must first decommission the node. This lets a node finish in-flight requests, rejects any new requests, and transfers all range replicas and range leases off the node. - -{{site.data.alerts.callout_danger}} -If you remove nodes without first telling CockroachDB to decommission them, you may cause data or even cluster unavailability. For more details about how this works and what to consider before removing nodes, see [Prepare for graceful shutdown](node-shutdown.html?filters=decommission#prepare-for-graceful-shutdown). -{{site.data.alerts.end}} - -1. Use the [`cockroach node status`](cockroach-node.html) command to get the internal IDs of nodes. For example, if you followed the steps in [Deploy CockroachDB with Kubernetes](deploy-cockroachdb-with-kubernetes.html#step-3-use-the-built-in-sql-client) to launch a secure client pod, get a shell into the `cockroachdb-client-secure` pod: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach node status \ - --certs-dir=/cockroach-certs \ - --host=cockroachdb-public - ~~~ - - ~~~ - id | address | build | started_at | updated_at | is_available | is_live - +----+---------------------------------------------------------------------------------+--------+----------------------------------+----------------------------------+--------------+---------+ - 1 | cockroachdb-0.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:36.486082+00:00 | 2018-11-29 18:24:24.587454+00:00 | true | true - 2 | cockroachdb-2.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:55:03.880406+00:00 | 2018-11-29 18:24:23.469302+00:00 | true | true - 3 | cockroachdb-1.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:41.383588+00:00 | 2018-11-29 18:24:25.030175+00:00 | true | true - 4 | cockroachdb-3.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 17:31:19.990784+00:00 | 2018-11-29 18:24:26.041686+00:00 | true | true - (4 rows) - ~~~ - - The pod uses the `root` client certificate created earlier to initialize the cluster, so there's no CSR approval required. - -1. Use the [`cockroach node decommission`](cockroach-node.html) command to decommission the node with the highest number in its address, specifying its ID (in this example, node ID `4` because its address is `cockroachdb-3`): - - {{site.data.alerts.callout_info}} - You must decommission the node with the highest number in its address. Kubernetes will remove the pod for the node with the highest number in its address when you reduce the replica count. 
- {{site.data.alerts.end}} - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach node decommission 4 \ - --certs-dir=/cockroach-certs \ - --host=cockroachdb-public - ~~~ - - You'll then see the decommissioning status print to `stderr` as it changes: - - ~~~ - id | is_live | replicas | is_decommissioning | membership | is_draining - -----+---------+----------+--------------------+-----------------+-------------- - 4 | true | 73 | true | decommissioning | false - ~~~ - - Once the node has been fully decommissioned, you'll see a confirmation: - - ~~~ - id | is_live | replicas | is_decommissioning | membership | is_draining - -----+---------+----------+--------------------+-----------------+-------------- - 4 | true | 0 | true | decommissioning | false - (1 row) - - No more data reported on target nodes. Please verify cluster health before removing the nodes. - ~~~ - -1. Once the node has been decommissioned, scale down your StatefulSet: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl scale statefulset cockroachdb --replicas=3 - ~~~ - - ~~~ - statefulset.apps/cockroachdb scaled - ~~~ - -1. Verify that the pod was successfully removed: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cockroachdb-0 1/1 Running 0 51m - cockroachdb-1 1/1 Running 0 47m - cockroachdb-2 1/1 Running 0 3m - cockroachdb-client-secure 1/1 Running 0 15m - ... - ~~~ - -1. You should also remove the persistent volume that was mounted to the pod. Get the persistent volume claims for the volumes: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pvc - ~~~ - - ~~~ - NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE - datadir-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - datadir-cockroachdb-1 Bound pvc-75e143ca-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - datadir-cockroachdb-2 Bound pvc-75ef409a-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - datadir-cockroachdb-3 Bound pvc-75e561ba-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - ~~~ - -1. Verify that the PVC with the highest number in its name is no longer mounted to a pod: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl describe pvc datadir-cockroachdb-3 - ~~~ - - ~~~ - Name: datadir-cockroachdb-3 - ... - Mounted By: - ~~~ - -1. Remove the persistent volume by deleting the PVC: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl delete pvc datadir-cockroachdb-3 - ~~~ - - ~~~ - persistentvolumeclaim "datadir-cockroachdb-3" deleted - ~~~ \ No newline at end of file diff --git a/src/current/_includes/v22.1/orchestration/kubernetes-scale-cluster-helm.md b/src/current/_includes/v22.1/orchestration/kubernetes-scale-cluster-helm.md deleted file mode 100644 index 8556b822651..00000000000 --- a/src/current/_includes/v22.1/orchestration/kubernetes-scale-cluster-helm.md +++ /dev/null @@ -1,118 +0,0 @@ -Before scaling CockroachDB, ensure that your Kubernetes cluster has enough worker nodes to host the number of pods you want to add. This is to ensure that two pods are not placed on the same worker node, as recommended in our [production guidance](recommended-production-settings.html#topology). - -For example, if you want to scale from 3 CockroachDB nodes to 4, your Kubernetes cluster should have at least 4 worker nodes. 
You can verify the size of your Kubernetes cluster by running `kubectl get nodes`. - -1. Edit your StatefulSet configuration to add another pod for the new CockroachDB node: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm upgrade \ - my-release \ - cockroachdb/cockroachdb \ - --set statefulset.replicas=4 \ - --reuse-values - ~~~ - - ~~~ - Release "my-release" has been upgraded. Happy Helming! - LAST DEPLOYED: Tue May 14 14:06:43 2019 - NAMESPACE: default - STATUS: DEPLOYED - - RESOURCES: - ==> v1beta1/PodDisruptionBudget - NAME AGE - my-release-cockroachdb-budget 51m - - ==> v1/Pod(related) - - NAME READY STATUS RESTARTS AGE - my-release-cockroachdb-0 1/1 Running 0 38m - my-release-cockroachdb-1 1/1 Running 0 39m - my-release-cockroachdb-2 1/1 Running 0 39m - my-release-cockroachdb-3 0/1 Pending 0 0s - my-release-cockroachdb-init-nwjkh 0/1 Completed 0 39m - - ... - ~~~ - -1. Get the name of the `Pending` CSR for the new pod: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get csr - ~~~ - - ~~~ - NAME AGE REQUESTOR CONDITION - default.client.root 1h system:serviceaccount:default:default Approved,Issued - default.node.my-release-cockroachdb-0 1h system:serviceaccount:default:default Approved,Issued - default.node.my-release-cockroachdb-1 1h system:serviceaccount:default:default Approved,Issued - default.node.my-release-cockroachdb-2 1h system:serviceaccount:default:default Approved,Issued - default.node.my-release-cockroachdb-3 2m system:serviceaccount:default:default Pending - node-csr-0Xmb4UTVAWMEnUeGbW4KX1oL4XV_LADpkwjrPtQjlZ4 1h kubelet Approved,Issued - node-csr-NiN8oDsLhxn0uwLTWa0RWpMUgJYnwcFxB984mwjjYsY 1h kubelet Approved,Issued - node-csr-aU78SxyU69pDK57aj6txnevr7X-8M3XgX9mTK0Hso6o 1h kubelet Approved,Issued - ... - ~~~ - - If you do not see a `Pending` CSR, wait a minute and try again. - -1. Examine the CSR for the new pod: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl describe csr default.node.my-release-cockroachdb-3 - ~~~ - - ~~~ - Name: default.node.my-release-cockroachdb-3 - Labels: - Annotations: - CreationTimestamp: Thu, 09 Nov 2017 13:39:37 -0500 - Requesting User: system:serviceaccount:default:default - Status: Pending - Subject: - Common Name: node - Serial Number: - Organization: Cockroach - Subject Alternative Names: - DNS Names: localhost - my-release-cockroachdb-1.my-release-cockroachdb.default.svc.cluster.local - my-release-cockroachdb-1.my-release-cockroachdb - my-release-cockroachdb-public - my-release-cockroachdb-public.default.svc.cluster.local - IP Addresses: 127.0.0.1 - 10.48.1.6 - Events: - ~~~ - -1. If everything looks correct, approve the CSR for the new pod: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl certificate approve default.node.my-release-cockroachdb-3 - ~~~ - - ~~~ - certificatesigningrequest.certificates.k8s.io/default.node.my-release-cockroachdb-3 approved - ~~~ - -1. Verify that the new pod started successfully: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - my-release-cockroachdb-0 1/1 Running 0 51m - my-release-cockroachdb-1 1/1 Running 0 47m - my-release-cockroachdb-2 1/1 Running 0 3m - my-release-cockroachdb-3 1/1 Running 0 1m - cockroachdb-client-secure 1/1 Running 0 15m - ... - ~~~ - -1. You can also open the [**Node List**](ui-cluster-overview-page.html#node-list) in the DB Console to ensure that the fourth node successfully joined the cluster. 
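
If you also want to confirm that the new pod was scheduled onto its own worker node, in line with the one-pod-per-worker-node guidance above, a generic `kubectl` check is to list the pods along with the nodes they run on (the `NODE` column in the wide output):

{% include_cached copy-clipboard.html %}
~~~ shell
$ kubectl get pods -o wide
~~~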
\ No newline at end of file diff --git a/src/current/_includes/v22.1/orchestration/kubernetes-scale-cluster-manual.md b/src/current/_includes/v22.1/orchestration/kubernetes-scale-cluster-manual.md deleted file mode 100644 index f42775704d3..00000000000 --- a/src/current/_includes/v22.1/orchestration/kubernetes-scale-cluster-manual.md +++ /dev/null @@ -1,51 +0,0 @@ -Before scaling up CockroachDB, note the following [topology recommendations](recommended-production-settings.html#topology): - -- Each CockroachDB node (running in its own pod) should run on a separate Kubernetes worker node. -- Each availability zone should have the same number of CockroachDB nodes. - -If your cluster has 3 CockroachDB nodes distributed across 3 availability zones (as in our [deployment example](deploy-cockroachdb-with-kubernetes.html?filters=manual)), we recommend scaling up by a multiple of 3 to retain an even distribution of nodes. You should therefore scale up to a minimum of 6 CockroachDB nodes, with 2 nodes in each zone. - -1. Run `kubectl get nodes` to list the worker nodes in your Kubernetes cluster. There should be at least as many worker nodes as pods you plan to add. This ensures that no more than one pod will be placed on each worker node. - -1. Add worker nodes if necessary: - - On GKE, [resize your cluster](https://cloud.google.com/kubernetes-engine/docs/how-to/resizing-a-cluster). If you deployed a [regional cluster](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-regional-cluster) as we recommended, you will use `--num-nodes` to specify the desired number of worker nodes in each zone. For example: - - {% include_cached copy-clipboard.html %} - ~~~ shell - gcloud container clusters resize {cluster-name} --region {region-name} --num-nodes 2 - ~~~ - - On EKS, resize your [Worker Node Group](https://eksctl.io/usage/managing-nodegroups/#scaling). - - On GCE, resize your [Managed Instance Group](https://cloud.google.com/compute/docs/instance-groups/). - - On AWS, resize your [Auto Scaling Group](https://docs.aws.amazon.com/autoscaling/latest/userguide/as-manual-scaling.html). - -1. Edit your StatefulSet configuration to add pods for each new CockroachDB node: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl scale statefulset cockroachdb --replicas=6 - ~~~ - - ~~~ - statefulset.apps/cockroachdb scaled - ~~~ - -1. Verify that the new pod started successfully: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cockroachdb-0 1/1 Running 0 51m - cockroachdb-1 1/1 Running 0 47m - cockroachdb-2 1/1 Running 0 3m - cockroachdb-3 1/1 Running 0 1m - cockroachdb-4 1/1 Running 0 1m - cockroachdb-5 1/1 Running 0 1m - cockroachdb-client-secure 1/1 Running 0 15m - ... - ~~~ - -1. You can also open the [**Node List**](ui-cluster-overview-page.html#node-list) in the DB Console to ensure that the fourth node successfully joined the cluster. \ No newline at end of file diff --git a/src/current/_includes/v22.1/orchestration/kubernetes-simulate-failure.md b/src/current/_includes/v22.1/orchestration/kubernetes-simulate-failure.md deleted file mode 100644 index 75ea2902627..00000000000 --- a/src/current/_includes/v22.1/orchestration/kubernetes-simulate-failure.md +++ /dev/null @@ -1,91 +0,0 @@ -Based on the `replicas: 3` line in the StatefulSet configuration, Kubernetes ensures that three pods/nodes are running at all times. 
When a pod/node fails, Kubernetes automatically creates another pod/node with the same network identity and persistent storage. - -To see this in action: - -1. Terminate one of the CockroachDB nodes: - -
- - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl delete pod cockroachdb-2 - ~~~ - - ~~~ - pod "cockroachdb-2" deleted - ~~~ - -
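
    If you'd like to watch the replacement pod come up in real time before checking the DB Console, you can leave a watch running in a separate terminal (a generic `kubectl` option; press `ctrl-c` to stop watching):

    {% include_cached copy-clipboard.html %}
    ~~~ shell
    $ kubectl get pods --watch
    ~~~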
- -
- - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl delete pod cockroachdb-2 - ~~~ - - ~~~ - pod "cockroachdb-2" deleted - ~~~ - -
- -
- - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl delete pod my-release-cockroachdb-2 - ~~~ - - ~~~ - pod "my-release-cockroachdb-2" deleted - ~~~ - -
- - -2. In the DB Console, the **Cluster Overview** will soon show one node as **Suspect**. As Kubernetes auto-restarts the node, watch how the node once again becomes healthy. - -3. Back in the terminal, verify that the pod was automatically restarted: - -
- - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pod cockroachdb-2 - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cockroachdb-2 1/1 Running 0 12s - ~~~ - -
- -
- - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pod cockroachdb-2 - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cockroachdb-2 1/1 Running 0 12s - ~~~ - -
- -
- - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pod my-release-cockroachdb-2 - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - my-release-cockroachdb-2 1/1 Running 0 44s - ~~~ - -
diff --git a/src/current/_includes/v22.1/orchestration/kubernetes-stop-cluster.md b/src/current/_includes/v22.1/orchestration/kubernetes-stop-cluster.md deleted file mode 100644 index afc17479b82..00000000000 --- a/src/current/_includes/v22.1/orchestration/kubernetes-stop-cluster.md +++ /dev/null @@ -1,145 +0,0 @@ -To shut down the CockroachDB cluster: - -
-{% capture latest_operator_version %}{% include_cached latest_operator_version.md %}{% endcapture %} - -1. Delete the previously created custom resource: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl delete -f example.yaml - ~~~ - -1. Remove the Operator: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl delete -f https://raw.githubusercontent.com/cockroachdb/cockroach-operator/v{{ latest_operator_version }}/install/operator.yaml - ~~~ - - This will delete the CockroachDB cluster being run by the Operator. It will *not* delete the persistent volumes that were attached to the pods. - - {{site.data.alerts.callout_danger}} - If you want to delete the persistent volumes and free up the storage used by CockroachDB, be sure you have a backup copy of your data. Data **cannot** be recovered once the persistent volumes are deleted. For more information, see the [Kubernetes documentation](https://kubernetes.io/docs/tasks/run-application/delete-stateful-set/#persistent-volumes). - {{site.data.alerts.end}} - -{{site.data.alerts.callout_info}} -This does not delete any secrets you may have created. For more information on managing secrets, see the [Kubernetes documentation](https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl). -{{site.data.alerts.end}} -
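
If you intend to keep the data, it can help at this point to see which persistent volume claims and volumes remain, so that you can back them up or delete them deliberately later. This is a generic `kubectl` listing, not an Operator-specific command:

{% include_cached copy-clipboard.html %}
~~~ shell
$ kubectl get pvc,pv
~~~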
- -
-1. Delete the resources associated with the `cockroachdb` label, including the logs and Prometheus and Alertmanager resources: - - {{site.data.alerts.callout_danger}} - This does not include deleting the persistent volumes that were attached to the pods. If you want to delete the persistent volumes and free up the storage used by CockroachDB, be sure you have a backup copy of your data. Data **cannot** be recovered once the persistent volumes are deleted. For more information, see the [Kubernetes documentation](https://kubernetes.io/docs/tasks/run-application/delete-stateful-set/#persistent-volumes). - {{site.data.alerts.end}} - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl delete pods,statefulsets,services,poddisruptionbudget,jobs,rolebinding,clusterrolebinding,role,clusterrole,serviceaccount,alertmanager,prometheus,prometheusrule,serviceMonitor -l app=cockroachdb - ~~~ - - ~~~ - pod "cockroachdb-0" deleted - pod "cockroachdb-1" deleted - pod "cockroachdb-2" deleted - statefulset.apps "alertmanager-cockroachdb" deleted - statefulset.apps "prometheus-cockroachdb" deleted - service "alertmanager-cockroachdb" deleted - service "cockroachdb" deleted - service "cockroachdb-public" deleted - poddisruptionbudget.policy "cockroachdb-budget" deleted - job.batch "cluster-init-secure" deleted - rolebinding.rbac.authorization.k8s.io "cockroachdb" deleted - clusterrolebinding.rbac.authorization.k8s.io "cockroachdb" deleted - clusterrolebinding.rbac.authorization.k8s.io "prometheus" deleted - role.rbac.authorization.k8s.io "cockroachdb" deleted - clusterrole.rbac.authorization.k8s.io "cockroachdb" deleted - clusterrole.rbac.authorization.k8s.io "prometheus" deleted - serviceaccount "cockroachdb" deleted - serviceaccount "prometheus" deleted - alertmanager.monitoring.coreos.com "cockroachdb" deleted - prometheus.monitoring.coreos.com "cockroachdb" deleted - prometheusrule.monitoring.coreos.com "prometheus-cockroachdb-rules" deleted - servicemonitor.monitoring.coreos.com "cockroachdb" deleted - ~~~ - -1. Delete the pod created for `cockroach` client commands, if you didn't do so earlier: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl delete pod cockroachdb-client-secure - ~~~ - - ~~~ - pod "cockroachdb-client-secure" deleted - ~~~ - -{{site.data.alerts.callout_info}} -This does not delete any secrets you may have created. For more information on managing secrets, see the [Kubernetes documentation](https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl). -{{site.data.alerts.end}} -
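
If you created Kubernetes secrets for this deployment (for example, for client certificates) and want to remove them as well, you can list them and delete the ones you no longer need. The `<secret-name>` below is a placeholder; substitute the names reported by the first command:

{% include_cached copy-clipboard.html %}
~~~ shell
$ kubectl get secrets
$ kubectl delete secret <secret-name>
~~~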
- -
-1. Uninstall the release: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm uninstall my-release - ~~~ - - ~~~ - release "my-release" deleted - ~~~ - -1. Delete the pod created for `cockroach` client commands, if you didn't do so earlier: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl delete pod cockroachdb-client-secure - ~~~ - - ~~~ - pod "cockroachdb-client-secure" deleted - ~~~ - -1. Get the names of any CSRs for the cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get csr - ~~~ - - ~~~ - NAME AGE REQUESTOR CONDITION - default.client.root 1h system:serviceaccount:default:default Approved,Issued - default.node.my-release-cockroachdb-0 1h system:serviceaccount:default:default Approved,Issued - default.node.my-release-cockroachdb-1 1h system:serviceaccount:default:default Approved,Issued - default.node.my-release-cockroachdb-2 1h system:serviceaccount:default:default Approved,Issued - default.node.my-release-cockroachdb-3 12m system:serviceaccount:default:default Approved,Issued - node-csr-0Xmb4UTVAWMEnUeGbW4KX1oL4XV_LADpkwjrPtQjlZ4 1h kubelet Approved,Issued - node-csr-NiN8oDsLhxn0uwLTWa0RWpMUgJYnwcFxB984mwjjYsY 1h kubelet Approved,Issued - node-csr-aU78SxyU69pDK57aj6txnevr7X-8M3XgX9mTK0Hso6o 1h kubelet Approved,Issued - ... - ~~~ - -1. Delete any CSRs that you created: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl delete csr default.client.root default.node.my-release-cockroachdb-0 default.node.my-release-cockroachdb-1 default.node.my-release-cockroachdb-2 default.node.my-release-cockroachdb-3 - ~~~ - - ~~~ - certificatesigningrequest "default.client.root" deleted - certificatesigningrequest "default.node.my-release-cockroachdb-0" deleted - certificatesigningrequest "default.node.my-release-cockroachdb-1" deleted - certificatesigningrequest "default.node.my-release-cockroachdb-2" deleted - certificatesigningrequest "default.node.my-release-cockroachdb-3" deleted - ~~~ - - {{site.data.alerts.callout_info}} - This does not delete any secrets you may have created. For more information on managing secrets, see the [Kubernetes documentation](https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl). - {{site.data.alerts.end}} -
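
Uninstalling the release does not delete the persistent volume claims that were created for the pods. If you also want to reclaim that storage after backing up any data you need, you can list the claims and delete them explicitly; `<pvc-name>` is a placeholder for the names reported by the first command:

{% include_cached copy-clipboard.html %}
~~~ shell
$ kubectl get pvc
$ kubectl delete pvc <pvc-name>
~~~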
diff --git a/src/current/_includes/v22.1/orchestration/kubernetes-upgrade-cluster-helm.md b/src/current/_includes/v22.1/orchestration/kubernetes-upgrade-cluster-helm.md deleted file mode 100644 index 6c796b28074..00000000000 --- a/src/current/_includes/v22.1/orchestration/kubernetes-upgrade-cluster-helm.md +++ /dev/null @@ -1,257 +0,0 @@ -{% assign previous_version = site.data.versions | where_exp: "previous_version", "previous_version.major_version == page.version.version" | first | map: "previous_version" %} - -1. Verify that you can upgrade. - - To upgrade to a new major version, you must first be on a production release of the previous version. The release does not need to be the latest production release of the previous version, but it must be a production [release](../releases/index.html) and not a testing release (alpha/beta). - - Therefore, in order to upgrade to {{ page.version.version }}, you must be on a production release of {{ previous_version }}. - - 1. If you are upgrading to {{ page.version.version }} from a production release earlier than {{ previous_version }}, or from a testing release (alpha/beta), first [upgrade to a production release of {{ previous_version }}]({% link {{ previous_version }}/upgrade-cockroachdb-kubernetes.md %}?filters=helm). Be sure to complete all the steps. - - 1. Then return to this page and perform a second upgrade to {{ page.version.version }}. - - 1. If you are upgrading from any production release of {{ previous_version }}, or from any earlier {{ page.version.version }} patch release, you do not have to go through intermediate releases; continue to step 2. - -1. Verify the overall health of your cluster using the [DB Console](ui-overview.html). On the **Overview**: - - Under **Node Status**, make sure all nodes that should be live are listed as such. If any nodes are unexpectedly listed as suspect or dead, identify why the nodes are offline and either restart them or [decommission](scale-cockroachdb-kubernetes.html?filters=helm#remove-nodes) them before beginning your upgrade. If there are dead and non-decommissioned nodes in your cluster, it will not be possible to finalize the upgrade (either automatically or manually). - - Under **Replication Status**, make sure there are 0 under-replicated and unavailable ranges. Otherwise, performing a rolling upgrade increases the risk that ranges will lose a majority of their replicas and cause cluster unavailability. Therefore, it's important to [identify and resolve the cause of range under-replication and/or unavailability](cluster-setup-troubleshooting.html#replication-issues) before beginning your upgrade. - - In the **Node List**: - - Make sure all nodes are on the same version. If not all nodes are on the same version, upgrade them to the cluster's highest current version first, and then start this process over. - - Make sure capacity and memory usage are reasonable for each node. Nodes must be able to tolerate some increase in case the new version uses more resources for your workload. Also go to **Metrics > Dashboard: Hardware** and make sure CPU percent is reasonable across the cluster. If there's not enough headroom on any of these metrics, consider [adding nodes](scale-cockroachdb-kubernetes.html?filters=helm#add-nodes) to your cluster before beginning your upgrade. - -{% assign rd = site.data.versions | where_exp: "rd", "rd.major_version == page.version.version" | first %} - -1. 
Review the [backward-incompatible changes in {{ page.version.version }}](../releases/{{ page.version.version }}.html{% unless rd.release_date == "N/A" or rd.release_date > today %}#{{ page.version.version | replace: ".", "-" }}-0-backward-incompatible-changes{% endunless %}) and [deprecated features](../releases/{{ page.version.version }}.html#{% unless rd.release_date == "N/A" or rd.release_date > today %}{{ page.version.version | replace: ".", "-" }}-0-deprecations{% endunless %}). If any affect your deployment, make the necessary changes before starting the rolling upgrade to {{ page.version.version }}. - -1. Decide how the upgrade will be finalized. - - By default, after all nodes are running the new version, the upgrade process will be **auto-finalized**. This will enable certain [features and performance improvements introduced in {{ page.version.version }}](upgrade-cockroach-version.html#features-that-require-upgrade-finalization). After finalization, however, it will no longer be possible to perform a downgrade to {{ previous_version }}. In the event of a catastrophic failure or corruption, the only option is to start a new cluster using the old binary and then restore from a [backup](take-full-and-incremental-backups.html) created prior to the upgrade. For this reason, **we recommend disabling auto-finalization** so you can monitor the stability and performance of the upgraded cluster before finalizing the upgrade, but note that you will need to follow all of the subsequent directions, including the manual finalization in a later step. - - {{site.data.alerts.callout_info}} - Finalization only applies when performing a major version upgrade (for example, from {{ previous_version }}.x to {{ page.version.version }}). Patch version upgrades (for example, within the {{ page.version.version }}.x series) can always be downgraded. - {{site.data.alerts.end}} - - {% if page.secure == true %} - - 1. Get a shell into the pod with the `cockroach` binary created earlier and start the CockroachDB [built-in SQL client](cockroach-sql.html): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach sql \ - --certs-dir=/cockroach-certs \ - --host=my-release-cockroachdb-public - ~~~ - - {% else %} - - 1. Launch a temporary interactive pod and start the [built-in SQL client](cockroach-sql.html) inside it: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl run cockroachdb -it \ - --image=cockroachdb/cockroach \ - --rm \ - --restart=Never \ - -- sql \ - --insecure \ - --host=my-release-cockroachdb-public - ~~~ - - {% endif %} - - 1. Set the `cluster.preserve_downgrade_option` [cluster setting](cluster-settings.html) to the version you are upgrading from: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SET CLUSTER SETTING cluster.preserve_downgrade_option = '{{ previous_version | remove_first: "v" }}'; - ~~~ - - 1. Exit the SQL shell and delete the temporary pod: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > \q - ~~~ - -1. Add a [partition](https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#staging-an-update) to the update strategy defined in the StatefulSet. Only the pods numbered greater than or equal to the partition value will be updated. 
For a cluster with 3 pods (e.g., `cockroachdb-0`, `cockroachdb-1`, `cockroachdb-2`) the partition value should be 2: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm upgrade \ - my-release \ - cockroachdb/cockroachdb \ - --set statefulset.updateStrategy.rollingUpdate.partition=2 - ~~~ - -1. Kick off the upgrade process by changing the Docker image used in the CockroachDB StatefulSet: - - {{site.data.alerts.callout_info}} - For Helm, you must remove the cluster initialization job from when the cluster was created before the cluster version can be changed. - {{site.data.alerts.end}} - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl delete job my-release-cockroachdb-init - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm upgrade \ - my-release \ - cockroachdb/cockroachdb \ - --set image.tag={{page.release_info.version}} \ - --reuse-values - ~~~ - -1. Check the status of your cluster's pods. You should see one of them being restarted: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - my-release-cockroachdb-0 1/1 Running 0 2m - my-release-cockroachdb-1 1/1 Running 0 3m - my-release-cockroachdb-2 0/1 ContainerCreating 0 25s - my-release-cockroachdb-init-nwjkh 0/1 ContainerCreating 0 6s - ... - ~~~ - - {{site.data.alerts.callout_info}} - Ignore the pod for cluster initialization. It is re-created as a byproduct of the StatefulSet configuration but does not impact your existing cluster. - {{site.data.alerts.end}} - -1. After the pod has been restarted with the new image, start the CockroachDB [built-in SQL client](cockroach-sql.html): - - {% if page.secure == true %} - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach sql \ - --certs-dir=/cockroach-certs \ - --host=my-release-cockroachdb-public - ~~~ - - {% else %} - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl run cockroachdb -it \ - --image=cockroachdb/cockroach \ - --rm \ - --restart=Never \ - -- sql \ - --insecure \ - --host=my-release-cockroachdb-public - ~~~ - {% endif %} - -1. Run the following SQL query to verify that the number of underreplicated ranges is zero: - - {% include_cached copy-clipboard.html %} - ~~~ sql - SELECT sum((metrics->>'ranges.underreplicated')::DECIMAL)::INT AS ranges_underreplicated FROM crdb_internal.kv_store_status; - ~~~ - - ~~~ - ranges_underreplicated - -------------------------- - 0 - (1 row) - ~~~ - - This indicates that it is safe to proceed to the next pod. - -1. Exit the SQL shell: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > \q - ~~~ - -1. Decrement the partition value by 1 to allow the next pod in the cluster to update: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm upgrade \ - my-release \ - cockroachdb/cockroachdb \ - --set statefulset.updateStrategy.rollingUpdate.partition=1 \ - ~~~ - -1. Repeat steps 4-8 until all pods have been restarted and are running the new image (the final partition value should be `0`). - -1. 
Check the image of each pod to confirm that all have been upgraded: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods \ - -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}' - ~~~ - - ~~~ - my-release-cockroachdb-0 cockroachdb/cockroach:{{page.release_info.version}} - my-release-cockroachdb-1 cockroachdb/cockroach:{{page.release_info.version}} - my-release-cockroachdb-2 cockroachdb/cockroach:{{page.release_info.version}} - ... - ~~~ - - You can also check the CockroachDB version of each node in the [DB Console](ui-cluster-overview-page.html#node-details). - - -1. If you disabled auto-finalization earlier, monitor the stability and performance of your cluster until you are comfortable with the upgrade (generally at least a day). - - If you decide to roll back the upgrade, repeat the rolling restart procedure with the old binary. - - {{site.data.alerts.callout_info}} - This is only possible when performing a major version upgrade (for example, from {{ previous_version }}.x to {{ page.version.version }}). Patch version upgrades (for example, within the {{ page.version.version }}.x series) are auto-finalized. - {{site.data.alerts.end}} - - To finalize the upgrade, re-enable auto-finalization: - - {% if page.secure == true %} - - 1. Get a shell into the pod with the `cockroach` binary created earlier and start the CockroachDB [built-in SQL client](cockroach-sql.html): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach sql \ - --certs-dir=/cockroach-certs \ - --host=my-release-cockroachdb-public - ~~~ - - {% else %} - - 1. Launch a temporary interactive pod and start the [built-in SQL client](cockroach-sql.html) inside it: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl run cockroachdb -it \ - --image=cockroachdb/cockroach \ - --rm \ - --restart=Never \ - -- sql \ - --insecure \ - --host=my-release-cockroachdb-public - ~~~ - - {% endif %} - - 2. Re-enable auto-finalization: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > RESET CLUSTER SETTING cluster.preserve_downgrade_option; - ~~~ - - 3. Exit the SQL shell and delete the temporary pod: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > \q - ~~~ diff --git a/src/current/_includes/v22.1/orchestration/kubernetes-upgrade-cluster-manual.md b/src/current/_includes/v22.1/orchestration/kubernetes-upgrade-cluster-manual.md deleted file mode 100644 index 0e4fb1b59ca..00000000000 --- a/src/current/_includes/v22.1/orchestration/kubernetes-upgrade-cluster-manual.md +++ /dev/null @@ -1,246 +0,0 @@ -{% assign previous_version = site.data.versions | where_exp: "previous_version", "previous_version.major_version == page.version.version" | first | map: "previous_version" %} - -1. Verify that you can upgrade. - - To upgrade to a new major version, you must first be on a production release of the previous version. The release does not need to be the latest production release of the previous version, but it must be a production [release](../releases/index.html) and not a testing release (alpha/beta). - - Therefore, in order to upgrade to {{ page.version.version }}, you must be on a production release of {{ previous_version }}. - - 1. 
If you are upgrading to {{ page.version.version }} from a production release earlier than {{ previous_version }}, or from a testing release (alpha/beta), first [upgrade to a production release of {{ previous_version }}]({% link {{ previous_version }}/upgrade-cockroachdb-kubernetes.md %}?filters=manual). Be sure to complete all the steps. - - 1. Then return to this page and perform a second upgrade to {{ page.version.version }}. - - 1. If you are upgrading from any production release of {{ previous_version }}, or from any earlier {{ page.version.version }} patch release, you do not have to go through intermediate releases; continue to step 2. - -1. Verify the overall health of your cluster using the [DB Console](ui-overview.html). On the **Overview**: - - Under **Node Status**, make sure all nodes that should be live are listed as such. If any nodes are unexpectedly listed as suspect or dead, identify why the nodes are offline and either restart them or [decommission](scale-cockroachdb-kubernetes.html?filters=manual#remove-nodes) them before beginning your upgrade. If there are dead and non-decommissioned nodes in your cluster, it will not be possible to finalize the upgrade (either automatically or manually). - - Under **Replication Status**, make sure there are 0 under-replicated and unavailable ranges. Otherwise, performing a rolling upgrade increases the risk that ranges will lose a majority of their replicas and cause cluster unavailability. Therefore, it's important to [identify and resolve the cause of range under-replication and/or unavailability](cluster-setup-troubleshooting.html#replication-issues) before beginning your upgrade. - - In the **Node List**: - - Make sure all nodes are on the same version. If not all nodes are on the same version, upgrade them to the cluster's highest current version first, and then start this process over. - - Make sure capacity and memory usage are reasonable for each node. Nodes must be able to tolerate some increase in case the new version uses more resources for your workload. Also go to **Metrics > Dashboard: Hardware** and make sure CPU percent is reasonable across the cluster. If there's not enough headroom on any of these metrics, consider [adding nodes](scale-cockroachdb-kubernetes.html?filters=manual#add-nodes) to your cluster before beginning your upgrade. - -{% assign rd = site.data.versions | where_exp: "rd", "rd.major_version == page.version.version" | first %} - -1. Review the [backward-incompatible changes in {{ page.version.version }}](../releases/{{ page.version.version }}.html{% unless rd.release_date == "N/A" or rd.release_date > today %}#{{ page.version.version | replace: ".", "-" }}-0-backward-incompatible-changes{% endunless %}) and [deprecated features](../releases/{{ page.version.version }}.html#{% unless rd.release_date == "N/A" or rd.release_date > today %}{{ page.version.version | replace: ".", "-" }}-0-deprecations{% endunless %}). If any affect your deployment, make the necessary changes before starting the rolling upgrade to {{ page.version.version }}. - -1. Decide how the upgrade will be finalized. - - By default, after all nodes are running the new version, the upgrade process will be **auto-finalized**. This will enable certain [features and performance improvements introduced in {{ page.version.version }}](upgrade-cockroach-version.html#features-that-require-upgrade-finalization). After finalization, however, it will no longer be possible to perform a downgrade to {{ previous_version }}. 
In the event of a catastrophic failure or corruption, the only option is to start a new cluster using the old binary and then restore from a [backup](take-full-and-incremental-backups.html) created prior to the upgrade. For this reason, **we recommend disabling auto-finalization** so you can monitor the stability and performance of the upgraded cluster before finalizing the upgrade, but note that you will need to follow all of the subsequent directions, including the manual finalization in a later step. - - {{site.data.alerts.callout_info}} - Finalization only applies when performing a major version upgrade (for example, from {{ previous_version }}.x to {{ page.version.version }}). Patch version upgrades (for example, within the {{ page.version.version }}.x series) can always be downgraded. - {{site.data.alerts.end}} - - {% if page.secure == true %} - - 1. Start the CockroachDB [built-in SQL client](cockroach-sql.html). For example, if you followed the steps in [Deploy CockroachDB with Kubernetes](deploy-cockroachdb-with-kubernetes.html?filters=manual#step-3-use-the-built-in-sql-client) to launch a secure client pod, get a shell into the `cockroachdb-client-secure` pod: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \-- ./cockroach sql \ - --certs-dir=/cockroach-certs \ - --host=cockroachdb-public - ~~~ - - {% else %} - - 1. Launch a temporary interactive pod and start the [built-in SQL client](cockroach-sql.html) inside it: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl run cockroachdb -it \ - --image=cockroachdb/cockroach \ - --rm \ - --restart=Never \ - -- sql \ - --insecure \ - --host=cockroachdb-public - ~~~ - - {% endif %} - - 1. Set the `cluster.preserve_downgrade_option` [cluster setting](cluster-settings.html) to the version you are upgrading from: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SET CLUSTER SETTING cluster.preserve_downgrade_option = '{{ previous_version | remove_first: "v" }}'; - ~~~ - - 1. Exit the SQL shell and delete the temporary pod: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > \q - ~~~ - -1. Add a [partition](https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#staging-an-update) to the update strategy defined in the StatefulSet. Only the pods numbered greater than or equal to the partition value will be updated. For a cluster with 3 pods (e.g., `cockroachdb-0`, `cockroachdb-1`, `cockroachdb-2`) the partition value should be 2: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl patch statefulset cockroachdb \ - -p='{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}' - ~~~ - - ~~~ - statefulset.apps/cockroachdb patched - ~~~ - -1. Kick off the upgrade process by changing the Docker image used in the CockroachDB StatefulSet: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl patch statefulset cockroachdb \ - --type='json' \ - -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"cockroachdb/cockroach:{{page.release_info.version}}"}]' - ~~~ - - ~~~ - statefulset.apps/cockroachdb patched - ~~~ - -1. Check the status of your cluster's pods. You should see one of them being restarted: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cockroachdb-0 1/1 Running 0 2m - cockroachdb-1 1/1 Running 0 2m - cockroachdb-2 0/1 Terminating 0 1m - ... - ~~~ - -1. 
After the pod has been restarted with the new image, start the CockroachDB [built-in SQL client](cockroach-sql.html): - - {% if page.secure == true %} - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \-- ./cockroach sql \ - --certs-dir=/cockroach-certs \ - --host=cockroachdb-public - ~~~ - - {% else %} - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl run cockroachdb -it \ - --image=cockroachdb/cockroach \ - --rm \ - --restart=Never \ - -- sql \ - --insecure \ - --host=cockroachdb-public - ~~~ - - {% endif %} - -1. Run the following SQL query to verify that the number of underreplicated ranges is zero: - - {% include_cached copy-clipboard.html %} - ~~~ sql - SELECT sum((metrics->>'ranges.underreplicated')::DECIMAL)::INT AS ranges_underreplicated FROM crdb_internal.kv_store_status; - ~~~ - - ~~~ - ranges_underreplicated - -------------------------- - 0 - (1 row) - ~~~ - - This indicates that it is safe to proceed to the next pod. - -1. Exit the SQL shell: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > \q - ~~~ - -1. Decrement the partition value by 1 to allow the next pod in the cluster to update: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl patch statefulset cockroachdb \ - -p='{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":1}}}}' - ~~~ - - ~~~ - statefulset.apps/cockroachdb patched - ~~~ - -1. Repeat steps 4-8 until all pods have been restarted and are running the new image (the final partition value should be `0`). - -1. Check the image of each pod to confirm that all have been upgraded: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods \ - -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}' - ~~~ - - ~~~ - cockroachdb-0 cockroachdb/cockroach:{{page.release_info.version}} - cockroachdb-1 cockroachdb/cockroach:{{page.release_info.version}} - cockroachdb-2 cockroachdb/cockroach:{{page.release_info.version}} - ... - ~~~ - - You can also check the CockroachDB version of each node in the [DB Console](ui-cluster-overview-page.html#node-details). - -1. If you disabled auto-finalization earlier, monitor the stability and performance of your cluster until you are comfortable with the upgrade (generally at least a day). - - If you decide to roll back the upgrade, repeat the rolling restart procedure with the old binary. - - {{site.data.alerts.callout_info}} - This is only possible when performing a major version upgrade (for example, from {{ previous_version }}.x to {{ page.version.version }}). Patch version upgrades (for example, within the {{ page.version.version }}.x series) are auto-finalized. - {{site.data.alerts.end}} - - To finalize the upgrade, re-enable auto-finalization: - - {% if page.secure == true %} - - 1. Start the CockroachDB [built-in SQL client](cockroach-sql.html): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach sql \ - --certs-dir=/cockroach-certs \ - --host=cockroachdb-public - ~~~ - - {% else %} - - 1. Launch a temporary interactive pod and start the [built-in SQL client](cockroach-sql.html) inside it: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl run cockroachdb -it \ - --image=cockroachdb/cockroach \ - --rm \ - --restart=Never \ - -- sql \ - --insecure \ - --host=cockroachdb-public - ~~~ - - {% endif %} - - 2. 
Re-enable auto-finalization:

    {% include_cached copy-clipboard.html %}
    ~~~ sql
    > RESET CLUSTER SETTING cluster.preserve_downgrade_option;
    ~~~

    3. Exit the SQL shell and delete the temporary pod:

    {% include_cached copy-clipboard.html %}
    ~~~ sql
    > \q
    ~~~
diff --git a/src/current/_includes/v22.1/orchestration/local-start-kubernetes.md b/src/current/_includes/v22.1/orchestration/local-start-kubernetes.md deleted file mode 100644 index e504d052dbe..00000000000 --- a/src/current/_includes/v22.1/orchestration/local-start-kubernetes.md +++ /dev/null @@ -1,24 +0,0 @@
## Before you begin

Before getting started, it's helpful to review some Kubernetes-specific terminology:

Feature | Description
--------|------------
[minikube](http://kubernetes.io/docs/getting-started-guides/minikube/) | This is the tool you'll use to run a Kubernetes cluster inside a VM on your local workstation.
[pod](http://kubernetes.io/docs/user-guide/pods/) | A pod is a group of one or more Docker containers. In this tutorial, all pods will run on your local workstation, each containing one Docker container running a single CockroachDB node. You'll start with 3 pods and grow to 4.
[StatefulSet](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/) | A StatefulSet is a group of pods treated as stateful units, where each pod has a distinguishable network identity and always binds back to the same persistent storage on restart. StatefulSets are considered stable as of Kubernetes version 1.9 after reaching beta in version 1.5.
[persistent volume](http://kubernetes.io/docs/user-guide/persistent-volumes/) | A persistent volume is a piece of storage mounted into a pod. The lifetime of a persistent volume is decoupled from the lifetime of the pod that's using it, ensuring that each CockroachDB node binds back to the same storage on restart.

When using `minikube`, persistent volumes are external temporary directories that endure until they are manually deleted or until the entire Kubernetes cluster is deleted. -[persistent volume claim](http://kubernetes.io/docs/user-guide/persistent-volumes/#persistentvolumeclaims) | When pods are created (one per CockroachDB node), each pod will request a persistent volume claim to “claim” durable storage for its node. - -## Step 1. Start Kubernetes - -1. Follow Kubernetes' [documentation](https://kubernetes.io/docs/tasks/tools/install-minikube/) to install `minikube`, the tool used to run Kubernetes locally, for your OS. This includes installing a hypervisor and `kubectl`, the command-line tool used to manage Kubernetes from your local workstation. - - {{site.data.alerts.callout_info}}Make sure you install minikube version 0.21.0 or later. Earlier versions do not include a Kubernetes server that supports the maxUnavailability field and PodDisruptionBudget resource type used in the CockroachDB StatefulSet configuration.{{site.data.alerts.end}} - -2. Start a local Kubernetes cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ minikube start - ~~~ diff --git a/src/current/_includes/v22.1/orchestration/monitor-cluster.md b/src/current/_includes/v22.1/orchestration/monitor-cluster.md deleted file mode 100644 index 94043bf91ea..00000000000 --- a/src/current/_includes/v22.1/orchestration/monitor-cluster.md +++ /dev/null @@ -1,110 +0,0 @@ -To access the cluster's [DB Console](ui-overview.html): - -{% if page.secure == true %} - -1. On secure clusters, [certain pages of the DB Console](ui-overview.html#db-console-access) can only be accessed by `admin` users. - - Get a shell into the pod and start the CockroachDB [built-in SQL client](cockroach-sql.html): - -
- - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach sql \ - --certs-dir=/cockroach/cockroach-certs \ - --host=cockroachdb-public - ~~~ - -
- -
- - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach sql \ - --certs-dir=/cockroach-certs \ - --host=cockroachdb-public - ~~~ - -
- -
- - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach sql \ - --certs-dir=/cockroach-certs \ - --host=my-release-cockroachdb-public - ~~~ - -
- -1. Assign `roach` to the `admin` role (you only need to do this once): - - {% include_cached copy-clipboard.html %} - ~~~ sql - > GRANT admin TO roach; - ~~~ - -1. Exit the SQL shell and pod: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > \q - ~~~ - -{% endif %} - -1. In a new terminal window, port-forward from your local machine to the `cockroachdb-public` service: - -
- - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl port-forward service/cockroachdb-public 8080 - ~~~ - -
- -
- - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl port-forward service/cockroachdb-public 8080 - ~~~ - -
- -
- - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl port-forward service/my-release-cockroachdb-public 8080 - ~~~ - -
- - ~~~ - Forwarding from 127.0.0.1:8080 -> 8080 - ~~~ - - {{site.data.alerts.callout_info}}The port-forward command must be run on the same machine as the web browser in which you want to view the DB Console. If you have been running these commands from a cloud instance or other non-local shell, you will not be able to view the UI without configuring kubectl locally and running the above port-forward command on your local machine.{{site.data.alerts.end}} - -{% if page.secure == true %} - -1. Go to https://localhost:8080 and log in with the username and password you created earlier. - - {% include {{ page.version.version }}/misc/chrome-localhost.md %} - -{% else %} - -1. Go to http://localhost:8080. - -{% endif %} - -1. In the UI, verify that the cluster is running as expected: - - View the [Node List](ui-cluster-overview-page.html#node-list) to ensure that all nodes successfully joined the cluster. - - Click the **Databases** tab on the left to verify that `bank` is listed. diff --git a/src/current/_includes/v22.1/orchestration/operator-check-namespace.md b/src/current/_includes/v22.1/orchestration/operator-check-namespace.md deleted file mode 100644 index d6c70aa03dc..00000000000 --- a/src/current/_includes/v22.1/orchestration/operator-check-namespace.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -All `kubectl` steps should be performed in the [namespace where you installed the Operator](deploy-cockroachdb-with-kubernetes.html#install-the-operator). By default, this is `cockroach-operator-system`. -{{site.data.alerts.end}} \ No newline at end of file diff --git a/src/current/_includes/v22.1/orchestration/start-cockroachdb-helm-insecure.md b/src/current/_includes/v22.1/orchestration/start-cockroachdb-helm-insecure.md deleted file mode 100644 index e78276828f0..00000000000 --- a/src/current/_includes/v22.1/orchestration/start-cockroachdb-helm-insecure.md +++ /dev/null @@ -1,115 +0,0 @@ -{{site.data.alerts.callout_danger}} -The CockroachDB Helm chart is undergoing maintenance for compatibility with Kubernetes versions 1.17 through 1.21 (the latest version as of this writing). No new feature development is currently planned. For new production and local deployments, we currently recommend using a manual configuration (**Configs** option). If you are experiencing issues with a Helm deployment on production, contact our [Support team](https://support.cockroachlabs.com/). -{{site.data.alerts.end}} - -1. [Install the Helm client](https://helm.sh/docs/intro/install) (version 3.0 or higher) and add the `cockroachdb` chart repository: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm repo add cockroachdb https://charts.cockroachdb.com/ - ~~~ - - ~~~ - "cockroachdb" has been added to your repositories - ~~~ - -1. Update your Helm chart repositories to ensure that you're using the [latest CockroachDB chart](https://github.com/cockroachdb/helm-charts/blob/master/cockroachdb/Chart.yaml): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm repo update - ~~~ - -1. Modify our Helm chart's [`values.yaml`](https://github.com/cockroachdb/helm-charts/blob/master/cockroachdb/values.yaml) parameters for your deployment scenario. - - Create a `my-values.yaml` file to override the defaults in `values.yaml`, substituting your own values in this example based on the guidelines below. 
- - {% include_cached copy-clipboard.html %} - ~~~ - statefulset: - resources: - limits: - memory: "8Gi" - requests: - memory: "8Gi" - conf: - cache: "2Gi" - max-sql-memory: "2Gi" - ~~~ - - 1. To avoid running out of memory when CockroachDB is not the only pod on a Kubernetes node, you *must* set memory limits explicitly. This is because CockroachDB does not detect the amount of memory allocated to its pod when run in Kubernetes. We recommend setting `conf.cache` and `conf.max-sql-memory` each to 1/4 of the `memory` allocation specified in `statefulset.resources.requests` and `statefulset.resources.limits`. - - {{site.data.alerts.callout_success}} - For example, if you are allocating 8Gi of `memory` to each CockroachDB node, allocate 2Gi to `cache` and 2Gi to `max-sql-memory`. - {{site.data.alerts.end}} - -1. For an insecure deployment, set `tls.enabled` to `false`. For clarity, this example includes the example configuration from the previous steps. - - {% include_cached copy-clipboard.html %} - ~~~ - statefulset: - resources: - limits: - memory: "8Gi" - requests: - memory: "8Gi" - conf: - cache: "2Gi" - max-sql-memory: "2Gi" - tls: - enabled: false - ~~~ - - 1. You may want to modify `storage.persistentVolume.size` and `storage.persistentVolume.storageClass` for your use case. This chart defaults to 100Gi of disk space per pod. For more details on customizing disks for performance, see [these instructions](kubernetes-performance.html#disk-type). - - {{site.data.alerts.callout_info}} - If necessary, you can [expand disk size](/docs/{{ page.version.version }}/configure-cockroachdb-kubernetes.html?filters=helm#expand-disk-size) after the cluster is live. - {{site.data.alerts.end}} - -1. Install the CockroachDB Helm chart. - - Provide a "release" name to identify and track this particular deployment of the chart, and override the default values with those in `my-values.yaml`. - - {{site.data.alerts.callout_info}} - This tutorial uses `my-release` as the release name. If you use a different value, be sure to adjust the release name in subsequent commands. - {{site.data.alerts.end}} - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm install my-release --values my-values.yaml cockroachdb/cockroachdb - ~~~ - - Behind the scenes, this command uses our `cockroachdb-statefulset.yaml` file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart. - -1. Confirm that CockroachDB cluster initialization has completed successfully, with the pods for CockroachDB showing `1/1` under `READY` and the pod for initialization showing `COMPLETED` under `STATUS`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - my-release-cockroachdb-0 1/1 Running 0 8m - my-release-cockroachdb-1 1/1 Running 0 8m - my-release-cockroachdb-2 1/1 Running 0 8m - my-release-cockroachdb-init-hxzsc 0/1 Completed 0 1h - ~~~ - -1. 
Confirm that the persistent volumes and corresponding claims were created successfully for all three pods: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pv - ~~~ - - ~~~ - NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE - pvc-71019b3a-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-0 standard 11m - pvc-7108e172-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-1 standard 11m - pvc-710dcb66-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-2 standard 11m - ~~~ - -{{site.data.alerts.callout_success}} -The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to logs for a pod, use `kubectl logs ` rather than checking the log on the persistent volume. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/orchestration/start-cockroachdb-helm-secure.md b/src/current/_includes/v22.1/orchestration/start-cockroachdb-helm-secure.md deleted file mode 100644 index cd8ac2e7b46..00000000000 --- a/src/current/_includes/v22.1/orchestration/start-cockroachdb-helm-secure.md +++ /dev/null @@ -1,112 +0,0 @@ -The CockroachDB Helm chart is compatible with Kubernetes versions 1.22 and earlier. - -The CockroachDB Helm chart is currently not under active development, and no new features are planned. However, Cockroach Labs remains committed to fully supporting the Helm chart by addressing defects, providing security patches, and addressing breaking changes due to deprecations in Kubernetes APIs. - -A deprecation notice for the Helm chart will be provided to customers a minimum of 6 months in advance of actual deprecation. - -{{site.data.alerts.callout_danger}} -If you are running a secure Helm deployment on Kubernetes 1.22 and later, you must migrate away from using the Kubernetes CA for cluster authentication. For details, see [Certificate management](secure-cockroachdb-kubernetes.html?filters=helm#migration-to-self-signer). -{{site.data.alerts.end}} - -{{site.data.alerts.callout_info}} -Secure CockroachDB deployments on Amazon EKS via Helm are [not yet supported](https://github.com/cockroachdb/cockroach/issues/38847). -{{site.data.alerts.end}} - -1. [Install the Helm client](https://helm.sh/docs/intro/install) (version 3.0 or higher) and add the `cockroachdb` chart repository: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm repo add cockroachdb https://charts.cockroachdb.com/ - ~~~ - - ~~~ - "cockroachdb" has been added to your repositories - ~~~ - -1. Update your Helm chart repositories to ensure that you're using the [latest CockroachDB chart](https://github.com/cockroachdb/helm-charts/blob/master/cockroachdb/Chart.yaml): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm repo update - ~~~ - -1. The cluster configuration is set in the Helm chart's [values file](https://github.com/cockroachdb/helm-charts/blob/master/cockroachdb/values.yaml). - - {{site.data.alerts.callout_info}} - By default, the Helm chart specifies CPU and memory resources that are appropriate for the virtual machines used in this deployment example. On a production cluster, you should substitute values that are appropriate for your machines and workload. For details on configuring your deployment, see [Configure the Cluster](configure-cockroachdb-kubernetes.html?filters=helm). 
- {{site.data.alerts.end}} - - Before deploying, modify some parameters in our Helm chart's [values file](https://github.com/cockroachdb/helm-charts/blob/master/cockroachdb/values.yaml): - - 1. Create a local YAML file (e.g., `my-values.yaml`) to specify your custom values. These will be used to override the defaults in `values.yaml`. - - 1. To avoid running out of memory when CockroachDB is not the only pod on a Kubernetes node, you *must* set memory limits explicitly. This is because CockroachDB does not detect the amount of memory allocated to its pod when run in Kubernetes. We recommend setting `conf.cache` and `conf.max-sql-memory` each to 1/4 of the `memory` allocation specified in `statefulset.resources.requests` and `statefulset.resources.limits`. - - {{site.data.alerts.callout_success}} - For example, if you are allocating 8Gi of `memory` to each CockroachDB node, allocate 2Gi to `cache` and 2Gi to `max-sql-memory`. - {{site.data.alerts.end}} - - {% include_cached copy-clipboard.html %} - ~~~ yaml - conf: - cache: "2Gi" - max-sql-memory: "2Gi" - ~~~ - - The Helm chart defaults to a secure deployment by automatically setting `tls.enabled` to `true`. - - {{site.data.alerts.callout_info}} - By default, the Helm chart will generate and sign 1 client and 1 node certificate to secure the cluster. To authenticate using your own CA, see [Certificate management](/docs/{{ page.version.version }}/secure-cockroachdb-kubernetes.html?filters=helm#use-a-custom-ca). - {{site.data.alerts.end}} - -1. Install the CockroachDB Helm chart, specifying your custom values file. - - Provide a "release" name to identify and track this particular deployment of the chart, and override the default values with those in `my-values.yaml`. - - {{site.data.alerts.callout_info}} - This tutorial uses `my-release` as the release name. If you use a different value, be sure to adjust the release name in subsequent commands. - {{site.data.alerts.end}} - - {{site.data.alerts.callout_danger}} - To allow the CockroachDB pods to successfully deploy, do not set the [`--wait` flag](https://helm.sh/docs/intro/using_helm/#helpful-options-for-installupgraderollback) when using Helm commands. - {{site.data.alerts.end}} - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm install my-release --values {custom-values}.yaml cockroachdb/cockroachdb - ~~~ - - Behind the scenes, this command uses our `cockroachdb-statefulset.yaml` file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart. - -1. Confirm that CockroachDB cluster initialization has completed successfully, with the pods for CockroachDB showing `1/1` under `READY` and the pod for initialization showing `COMPLETED` under `STATUS`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - my-release-cockroachdb-0 1/1 Running 0 8m - my-release-cockroachdb-1 1/1 Running 0 8m - my-release-cockroachdb-2 1/1 Running 0 8m - my-release-cockroachdb-init-hxzsc 0/1 Completed 0 1h - ~~~ - -1. 
Confirm that the persistent volumes and corresponding claims were created successfully for all three pods: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pv - ~~~ - - ~~~ - NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE - pvc-71019b3a-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-0 standard 11m - pvc-7108e172-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-1 standard 11m - pvc-710dcb66-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-2 standard 11m - ~~~ - -{{site.data.alerts.callout_success}} -The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to logs for a pod, use `kubectl logs ` rather than checking the log on the persistent volume. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/orchestration/start-cockroachdb-insecure.md b/src/current/_includes/v22.1/orchestration/start-cockroachdb-insecure.md deleted file mode 100644 index c0692798b67..00000000000 --- a/src/current/_includes/v22.1/orchestration/start-cockroachdb-insecure.md +++ /dev/null @@ -1,114 +0,0 @@ -1. From your local workstation, use our [`cockroachdb-statefulset.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/cockroachdb-statefulset.yaml) file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it. - - Download [`cockroachdb-statefulset.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/cockroachdb-statefulset.yaml): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl -O https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cockroachdb-statefulset.yaml - ~~~ - - {{site.data.alerts.callout_info}} - By default, this manifest specifies CPU and memory resources that are appropriate for the virtual machines used in this deployment example. On a production cluster, you should substitute values that are appropriate for your machines and workload. For details on configuring your deployment, see [Resource management](configure-cockroachdb-kubernetes.html?filters=manual). - {{site.data.alerts.end}} - - Use the file to create the StatefulSet and start the cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl create -f cockroachdb-statefulset.yaml - ~~~ - - ~~~ - service/cockroachdb-public created - service/cockroachdb created - poddisruptionbudget.policy/cockroachdb-budget created - statefulset.apps/cockroachdb created - ~~~ - - Alternatively, if you'd rather start with a configuration file that has been customized for performance: - - 1. Download our [performance version of `cockroachdb-statefulset-insecure.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/performance/cockroachdb-statefulset-insecure.yaml): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl -O https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/performance/cockroachdb-statefulset-insecure.yaml - ~~~ - - 2. Modify the file wherever there is a `TODO` comment. - - 3. Use the file to create the StatefulSet and start the cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl create -f cockroachdb-statefulset-insecure.yaml - ~~~ - -2. Confirm that three pods are `Running` successfully. 
Note that they will not - be considered `Ready` until after the cluster has been initialized: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cockroachdb-0 0/1 Running 0 2m - cockroachdb-1 0/1 Running 0 2m - cockroachdb-2 0/1 Running 0 2m - ~~~ - -3. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get persistentvolumes - ~~~ - - ~~~ - NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE - pvc-52f51ecf-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-0 26s - pvc-52fd3a39-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-1 27s - pvc-5315efda-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-2 27s - ~~~ - -4. Use our [`cluster-init.yaml`](https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init.yaml) file to perform a one-time initialization that joins the CockroachDB nodes into a single cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl create \ - -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init.yaml - ~~~ - - ~~~ - job.batch/cluster-init created - ~~~ - -5. Confirm that cluster initialization has completed successfully. The job should be considered successful and the Kubernetes pods should soon be considered `Ready`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get job cluster-init - ~~~ - - ~~~ - NAME COMPLETIONS DURATION AGE - cluster-init 1/1 7s 27s - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cluster-init-cqf8l 0/1 Completed 0 56s - cockroachdb-0 1/1 Running 0 7m51s - cockroachdb-1 1/1 Running 0 7m51s - cockroachdb-2 1/1 Running 0 7m51s - ~~~ - -{{site.data.alerts.callout_success}} -The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to a pod/node's logs to troubleshoot, use `kubectl logs ` rather than checking the log on the persistent volume. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/orchestration/start-cockroachdb-local-helm-insecure.md b/src/current/_includes/v22.1/orchestration/start-cockroachdb-local-helm-insecure.md deleted file mode 100644 index 494b3e6207e..00000000000 --- a/src/current/_includes/v22.1/orchestration/start-cockroachdb-local-helm-insecure.md +++ /dev/null @@ -1,65 +0,0 @@ -1. [Install the Helm client](https://helm.sh/docs/intro/install) (version 3.0 or higher) and add the `cockroachdb` chart repository: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm repo add cockroachdb https://charts.cockroachdb.com/ - ~~~ - - ~~~ - "cockroachdb" has been added to your repositories - ~~~ - -2. Update your Helm chart repositories to ensure that you're using the [latest CockroachDB chart](https://github.com/cockroachdb/helm-charts/blob/master/cockroachdb/Chart.yaml): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm repo update - ~~~ - -3. Install the CockroachDB Helm chart. - - Provide a "release" name to identify and track this particular deployment of the chart. - - {{site.data.alerts.callout_info}} - This tutorial uses `my-release` as the release name. If you use a different value, be sure to adjust the release name in subsequent commands. 
- {{site.data.alerts.end}} - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm install my-release cockroachdb/cockroachdb - ~~~ - - Behind the scenes, this command uses our `cockroachdb-statefulset.yaml` file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart. - -4. Confirm that CockroachDB cluster initialization has completed successfully, with the pods for CockroachDB showing `1/1` under `READY` and the pod for initialization showing `COMPLETED` under `STATUS`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - my-release-cockroachdb-0 1/1 Running 0 8m - my-release-cockroachdb-1 1/1 Running 0 8m - my-release-cockroachdb-2 1/1 Running 0 8m - my-release-cockroachdb-init-hxzsc 0/1 Completed 0 1h - ~~~ - -5. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pv - ~~~ - - ~~~ - NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE - pvc-71019b3a-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-0 standard 11m - pvc-7108e172-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-1 standard 11m - pvc-710dcb66-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-2 standard 11m - ~~~ - -{{site.data.alerts.callout_success}} -The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to a pod/node's logs to troubleshoot, use `kubectl logs ` rather than checking the log on the persistent volume. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/orchestration/start-cockroachdb-local-insecure.md b/src/current/_includes/v22.1/orchestration/start-cockroachdb-local-insecure.md deleted file mode 100644 index 37fe8e46939..00000000000 --- a/src/current/_includes/v22.1/orchestration/start-cockroachdb-local-insecure.md +++ /dev/null @@ -1,83 +0,0 @@ -1. From your local workstation, use our [`cockroachdb-statefulset.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/cockroachdb-statefulset.yaml) file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl create -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cockroachdb-statefulset.yaml - ~~~ - - ~~~ - service/cockroachdb-public created - service/cockroachdb created - poddisruptionbudget.policy/cockroachdb-budget created - statefulset.apps/cockroachdb created - ~~~ - -2. Confirm that three pods are `Running` successfully. Note that they will not - be considered `Ready` until after the cluster has been initialized: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cockroachdb-0 0/1 Running 0 2m - cockroachdb-1 0/1 Running 0 2m - cockroachdb-2 0/1 Running 0 2m - ~~~ - -3. 
Confirm that the persistent volumes and corresponding claims were created successfully for all three pods: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pv - ~~~ - - ~~~ - NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE - pvc-52f51ecf-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-0 26s - pvc-52fd3a39-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-1 27s - pvc-5315efda-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-2 27s - ~~~ - -4. Use our [`cluster-init.yaml`](https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init.yaml) file to perform a one-time initialization that joins the CockroachDB nodes into a single cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl create \ - -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init.yaml - ~~~ - - ~~~ - job.batch/cluster-init created - ~~~ - -5. Confirm that cluster initialization has completed successfully. The job should be considered successful and the Kubernetes pods should soon be considered `Ready`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get job cluster-init - ~~~ - - ~~~ - NAME COMPLETIONS DURATION AGE - cluster-init 1/1 7s 27s - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cluster-init-cqf8l 0/1 Completed 0 56s - cockroachdb-0 1/1 Running 0 7m51s - cockroachdb-1 1/1 Running 0 7m51s - cockroachdb-2 1/1 Running 0 7m51s - ~~~ - -{{site.data.alerts.callout_success}} -The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to a pod/node's logs to troubleshoot, use `kubectl logs ` rather than checking the log on the persistent volume. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/orchestration/start-cockroachdb-operator-secure.md b/src/current/_includes/v22.1/orchestration/start-cockroachdb-operator-secure.md deleted file mode 100644 index bb8c3b445e6..00000000000 --- a/src/current/_includes/v22.1/orchestration/start-cockroachdb-operator-secure.md +++ /dev/null @@ -1,125 +0,0 @@ -### Install the Operator - -{% capture latest_operator_version %}{% include_cached latest_operator_version.md %}{% endcapture %} - -1. Apply the [custom resource definition (CRD)](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions) for the Operator: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl apply -f https://raw.githubusercontent.com/cockroachdb/cockroach-operator/v{{ latest_operator_version }}/install/crds.yaml - ~~~ - - ~~~ - customresourcedefinition.apiextensions.k8s.io/crdbclusters.crdb.cockroachlabs.com created - ~~~ - -1. By default, the Operator is configured to install in the `cockroach-operator-system` namespace and to manage CockroachDB instances for all namespaces on the cluster. - - If you'd like to change either of these defaults: - - 1. Download the Operator manifest: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl -O https://raw.githubusercontent.com/cockroachdb/cockroach-operator/v{{ latest_operator_version }}/install/operator.yaml - ~~~ - - 1. To use a custom namespace, edit all instances of `namespace: cockroach-operator-system` with your desired namespace. - - 1. 
To limit the namespaces that will be monitored, set the `WATCH_NAMESPACE` environment variable in the `Deployment` pod spec. This can be set to a single namespace, or a comma-delimited set of namespaces. When set, only those `CrdbCluster` resources in the supplied namespace(s) will be reconciled. - - 1. Instead of using the command below, apply your local version of the Operator manifest to the cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl apply -f operator.yaml - ~~~ - - If you want to use the default namespace settings: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl apply -f https://raw.githubusercontent.com/cockroachdb/cockroach-operator/v{{ latest_operator_version }}/install/operator.yaml - ~~~ - - ~~~ - clusterrole.rbac.authorization.k8s.io/cockroach-database-role created - serviceaccount/cockroach-database-sa created - clusterrolebinding.rbac.authorization.k8s.io/cockroach-database-rolebinding created - role.rbac.authorization.k8s.io/cockroach-operator-role created - clusterrolebinding.rbac.authorization.k8s.io/cockroach-operator-rolebinding created - clusterrole.rbac.authorization.k8s.io/cockroach-operator-role created - serviceaccount/cockroach-operator-sa created - rolebinding.rbac.authorization.k8s.io/cockroach-operator-default created - deployment.apps/cockroach-operator created - ~~~ - -1. Set your current namespace to the one used by the Operator. For example, to use the Operator's default namespace: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl config set-context --current --namespace=cockroach-operator-system - ~~~ - -1. Validate that the Operator is running: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cockroach-operator-6f7b86ffc4-9ppkv 1/1 Running 0 54s - ~~~ - -### Initialize the cluster - -{{site.data.alerts.callout_info}} -After a cluster managed by the Kubernetes operator is initialized, its Kubernetes labels cannot be modified. For more details, refer to [Limitations](#limitations). -{{site.data.alerts.end}} - -1. Download `example.yaml`, a custom resource that tells the Operator how to configure the Kubernetes cluster. - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl -O https://raw.githubusercontent.com/cockroachdb/cockroach-operator/v{{ latest_operator_version }}/examples/example.yaml - ~~~ - - {{site.data.alerts.callout_info}} - By default, this custom resource specifies CPU and memory resources that are appropriate for the virtual machines used in this deployment example. On a production cluster, you should substitute values that are appropriate for your machines and workload. For details on configuring your deployment, see [Configure the Cluster](configure-cockroachdb-kubernetes.html). - {{site.data.alerts.end}} - - {{site.data.alerts.callout_info}} - By default, the Operator will generate and sign 1 client and 1 node certificate to secure the cluster. This means that if you do not provide a CA, a `cockroach`-generated CA is used. If you want to authenticate using your own CA, [specify the generated secrets in the custom resource](secure-cockroachdb-kubernetes.html#use-a-custom-ca) **before** proceeding to the next step. - {{site.data.alerts.end}} - -1. Apply `example.yaml`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl apply -f example.yaml - ~~~ - - The Operator will create a StatefulSet and initialize the nodes as a cluster. 
- - ~~~ - crdbcluster.crdb.cockroachlabs.com/cockroachdb created - ~~~ - -1. Check that the pods were created: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cockroach-operator-6f7b86ffc4-9t9zb 1/1 Running 0 3m22s - cockroachdb-0 1/1 Running 0 2m31s - cockroachdb-1 1/1 Running 0 102s - cockroachdb-2 1/1 Running 0 46s - ~~~ - - Each pod should have `READY` status soon after being created. diff --git a/src/current/_includes/v22.1/orchestration/start-cockroachdb-secure.md b/src/current/_includes/v22.1/orchestration/start-cockroachdb-secure.md deleted file mode 100644 index 972cabc2d8e..00000000000 --- a/src/current/_includes/v22.1/orchestration/start-cockroachdb-secure.md +++ /dev/null @@ -1,108 +0,0 @@ -### Configure the cluster - -1. Download and modify our [StatefulSet configuration](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/bring-your-own-certs/cockroachdb-statefulset.yaml): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl -O https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/bring-your-own-certs/cockroachdb-statefulset.yaml - ~~~ - -1. Update `secretName` with the name of the corresponding node secret. - - The secret names depend on your method for generating secrets. For example, if you follow the below [steps using `cockroach cert`](#create-certificates), use this secret name: - - {% include_cached copy-clipboard.html %} - ~~~ yaml - secret: - secretName: cockroachdb.node - ~~~ - -1. The StatefulSet configuration deploys CockroachDB into the `default` namespace. To use a different namespace, search for `kind: RoleBinding` and change its `subjects.namespace` property to the name of the namespace. Otherwise, a `failed to read secrets` error occurs when you attempt to follow the steps in [Initialize the cluster](#initialize-the-cluster). - -{{site.data.alerts.callout_info}} -By default, this manifest specifies CPU and memory resources that are appropriate for the virtual machines used in this deployment example. On a production cluster, you should substitute values that are appropriate for your machines and workload. For details on configuring your deployment, see [Configure the Cluster](configure-cockroachdb-kubernetes.html?filters=manual). -{{site.data.alerts.end}} - -### Create certificates - -{{site.data.alerts.callout_success}} -The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to a pod/node's logs to troubleshoot, use `kubectl logs ` rather than checking the log on the persistent volume. -{{site.data.alerts.end}} - -{% include {{ page.version.version }}/orchestration/kubernetes-cockroach-cert.md %} - -### Initialize the cluster - -1. Use the config file you downloaded to create the StatefulSet that automatically creates 3 pods, each running a CockroachDB node: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl create -f cockroachdb-statefulset.yaml - ~~~ - - ~~~ - serviceaccount/cockroachdb created - role.rbac.authorization.k8s.io/cockroachdb created - rolebinding.rbac.authorization.k8s.io/cockroachdb created - service/cockroachdb-public created - service/cockroachdb created - poddisruptionbudget.policy/cockroachdb-budget created - statefulset.apps/cockroachdb created - ~~~ - -1. Initialize the CockroachDB cluster: - - 1. Confirm that three pods are `Running` successfully. 
Note that they will not be considered `Ready` until after the cluster has been initialized: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cockroachdb-0 0/1 Running 0 2m - cockroachdb-1 0/1 Running 0 2m - cockroachdb-2 0/1 Running 0 2m - ~~~ - - 1. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pv - ~~~ - - ~~~ - NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE - pvc-9e435563-fb2e-11e9-a65c-42010a8e0fca 100Gi RWO Delete Bound default/datadir-cockroachdb-0 standard 51m - pvc-9e47d820-fb2e-11e9-a65c-42010a8e0fca 100Gi RWO Delete Bound default/datadir-cockroachdb-1 standard 51m - pvc-9e4f57f0-fb2e-11e9-a65c-42010a8e0fca 100Gi RWO Delete Bound default/datadir-cockroachdb-2 standard 51m - ~~~ - - 1. Run `cockroach init` on one of the pods to complete the node startup process and have them join together as a cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-0 \ - -- /cockroach/cockroach init \ - --certs-dir=/cockroach/cockroach-certs - ~~~ - - ~~~ - Cluster successfully initialized - ~~~ - - 1. Confirm that cluster initialization has completed successfully. The job should be considered successful and the Kubernetes pods should soon be considered `Ready`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cockroachdb-0 1/1 Running 0 3m - cockroachdb-1 1/1 Running 0 3m - cockroachdb-2 1/1 Running 0 3m - ~~~ \ No newline at end of file diff --git a/src/current/_includes/v22.1/orchestration/start-kubernetes.md b/src/current/_includes/v22.1/orchestration/start-kubernetes.md deleted file mode 100644 index 5168d470465..00000000000 --- a/src/current/_includes/v22.1/orchestration/start-kubernetes.md +++ /dev/null @@ -1,98 +0,0 @@ -You can use the hosted [Google Kubernetes Engine (GKE)](#hosted-gke) service or the hosted [Amazon Elastic Kubernetes Service (EKS)](#hosted-eks) to quickly start Kubernetes. - -{{site.data.alerts.callout_info}} -GKE or EKS are not required to run CockroachDB on Kubernetes. A manual GCE or AWS cluster with the [minimum recommended Kubernetes version](#kubernetes-version) and at least 3 pods, each presenting [sufficient resources](#resources) to start a CockroachDB node, can also be used. -{{site.data.alerts.end}} - -### Hosted GKE - -1. Complete the **Before You Begin** steps described in the [Google Kubernetes Engine Quickstart](https://cloud.google.com/kubernetes-engine/docs/quickstart) documentation. - - This includes installing `gcloud`, which is used to create and delete Kubernetes Engine clusters, and `kubectl`, which is the command-line tool used to manage Kubernetes from your workstation. - - {{site.data.alerts.callout_success}} - The documentation offers the choice of using Google's Cloud Shell product or using a local shell on your machine. Choose to use a local shell if you want to be able to view the DB Console using the steps in this guide. - {{site.data.alerts.end}} - -2. 
From your local workstation, start the Kubernetes cluster, specifying one of the available [regions](https://cloud.google.com/compute/docs/regions-zones#available) (e.g., `us-east1`): - - {{site.data.alerts.callout_success}} - Since this region can differ from your default `gcloud` region, be sure to include the `--region` flag to run `gcloud` commands against this cluster. - {{site.data.alerts.end}} - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ gcloud container clusters create cockroachdb --machine-type n2-standard-4 --region {region-name} --num-nodes 1 - ~~~ - - ~~~ - Creating cluster cockroachdb...done. - ~~~ - - This creates GKE instances and joins them into a single Kubernetes cluster named `cockroachdb`. The `--region` flag specifies a [regional three-zone cluster](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-regional-cluster), and `--num-nodes` specifies one Kubernetes worker node in each zone. - - The `--machine-type` flag tells the node pool to use the [`n2-standard-4`](https://cloud.google.com/compute/docs/machine-types#standard_machine_types) machine type (4 vCPUs, 16 GB memory), which meets our [recommended CPU and memory configuration](recommended-production-settings.html#basic-hardware-recommendations). - - The process can take a few minutes, so do not move on to the next step until you see a `Creating cluster cockroachdb...done` message and details about your cluster. - -3. Get the email address associated with your Google Cloud account: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ gcloud info | grep Account - ~~~ - - ~~~ - Account: [your.google.cloud.email@example.org] - ~~~ - - {{site.data.alerts.callout_danger}} - This command returns your email address in all lowercase. However, in the next step, you must enter the address using the accurate capitalization. For example, if your address is YourName@example.com, you must use YourName@example.com and not yourname@example.com. - {{site.data.alerts.end}} - -4. [Create the RBAC roles](https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control#prerequisites_for_using_role-based_access_control) CockroachDB needs for running on GKE, using the address from the previous step: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl create clusterrolebinding $USER-cluster-admin-binding \ - --clusterrole=cluster-admin \ - --user={your.google.cloud.email@example.org} - ~~~ - - ~~~ - clusterrolebinding.rbac.authorization.k8s.io/your.username-cluster-admin-binding created - ~~~ - -### Hosted EKS - -1. Complete the steps described in the [EKS Getting Started](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html) documentation. - - This includes installing and configuring the AWS CLI and `eksctl`, which is the command-line tool used to create and delete Kubernetes clusters on EKS, and `kubectl`, which is the command-line tool used to manage Kubernetes from your workstation. - - {{site.data.alerts.callout_info}} - If you are running [EKS-Anywhere](https://aws.amazon.com/eks/eks-anywhere/), CockroachDB requires that you [configure your default storage class](https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/) to auto-provision persistent volumes. Alternatively, you can define a custom storage configuration as required by your install pattern. - {{site.data.alerts.end}} - -2. 
From your local workstation, start the Kubernetes cluster: - - {{site.data.alerts.callout_success}} - To ensure that all 3 nodes can be placed into a different availability zone, you may want to first [confirm that at least 3 zones are available in the region](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#availability-zones-describe) for your account. - {{site.data.alerts.end}} - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ eksctl create cluster \ - --name cockroachdb \ - --nodegroup-name standard-workers \ - --node-type m5.xlarge \ - --nodes 3 \ - --nodes-min 1 \ - --nodes-max 4 \ - --node-ami auto - ~~~ - - This creates EKS instances and joins them into a single Kubernetes cluster named `cockroachdb`. The `--node-type` flag tells the node pool to use the [`m5.xlarge`](https://aws.amazon.com/ec2/instance-types/) instance type (4 vCPUs, 16 GB memory), which meets our [recommended CPU and memory configuration](recommended-production-settings.html#basic-hardware-recommendations). - - Cluster provisioning usually takes between 10 and 15 minutes. Do not move on to the next step until you see a message like `[✔] EKS cluster "cockroachdb" in "us-east-1" region is ready` and details about your cluster. - -3. Open the [AWS CloudFormation console](https://console.aws.amazon.com/cloudformation/home) to verify that the stacks `eksctl-cockroachdb-cluster` and `eksctl-cockroachdb-nodegroup-standard-workers` were successfully created. Be sure that your region is selected in the console. \ No newline at end of file diff --git a/src/current/_includes/v22.1/orchestration/test-cluster-insecure.md b/src/current/_includes/v22.1/orchestration/test-cluster-insecure.md deleted file mode 100644 index 285097f8e69..00000000000 --- a/src/current/_includes/v22.1/orchestration/test-cluster-insecure.md +++ /dev/null @@ -1,76 +0,0 @@ -1. Launch a temporary interactive pod and start the [built-in SQL client](cockroach-sql.html) inside it: - -
- - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl run cockroachdb -it \ - --image=cockroachdb/cockroach:{{page.release_info.version}} \ - --rm \ - --restart=Never \ - -- sql \ - --insecure \ - --host=cockroachdb-public - ~~~ - -
- -
- - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl run cockroachdb -it \ - --image=cockroachdb/cockroach:{{page.release_info.version}} \ - --rm \ - --restart=Never \ - -- sql \ - --insecure \ - --host=my-release-cockroachdb-public - ~~~ - -
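    Because the command above uses `--rm`, the pod is deleted as soon as you exit the SQL shell. If you only need to run a single statement, the same pattern also works non-interactively. This is a sketch only; the `--execute` flag and the statement shown are illustrative:

    {% include_cached copy-clipboard.html %}
    ~~~ shell
    # One-off query: the temporary pod starts, runs the statement, and is removed on exit.
    $ kubectl run cockroachdb -it \
    --image=cockroachdb/cockroach:{{page.release_info.version}} \
    --rm \
    --restart=Never \
    -- sql \
    --insecure \
    --host=my-release-cockroachdb-public \
    --execute="SHOW DATABASES;"
    ~~~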
- -2. Run some basic [CockroachDB SQL statements](learn-cockroachdb-sql.html): - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE DATABASE bank; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE TABLE bank.accounts ( - id UUID PRIMARY KEY DEFAULT gen_random_uuid(), - balance DECIMAL - ); - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > INSERT INTO bank.accounts (balance) - VALUES - (1000.50), (20000), (380), (500), (55000); - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SELECT * FROM bank.accounts; - ~~~ - - ~~~ - id | balance - +--------------------------------------+---------+ - 6f123370-c48c-41ff-b384-2c185590af2b | 380 - 990c9148-1ea0-4861-9da7-fd0e65b0a7da | 1000.50 - ac31c671-40bf-4a7b-8bee-452cff8a4026 | 500 - d58afd93-5be9-42ba-b2e2-dc00dcedf409 | 20000 - e6d8f696-87f5-4d3c-a377-8e152fdc27f7 | 55000 - (5 rows) - ~~~ - -3. Exit the SQL shell and delete the temporary pod: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > \q - ~~~ diff --git a/src/current/_includes/v22.1/orchestration/test-cluster-secure.md b/src/current/_includes/v22.1/orchestration/test-cluster-secure.md deleted file mode 100644 index 8e72dd5b893..00000000000 --- a/src/current/_includes/v22.1/orchestration/test-cluster-secure.md +++ /dev/null @@ -1,144 +0,0 @@ -To use the CockroachDB SQL client, first launch a secure pod running the `cockroach` binary. - -
- -{% capture latest_operator_version %}{% include_cached latest_operator_version.md %}{% endcapture %} - -{% include_cached copy-clipboard.html %} -~~~ shell -$ kubectl create \ --f https://raw.githubusercontent.com/cockroachdb/cockroach-operator/v{{ latest_operator_version }}/examples/client-secure-operator.yaml -~~~ - -1. Get a shell into the pod and start the CockroachDB [built-in SQL client](cockroach-sql.html): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach sql \ - --certs-dir=/cockroach/cockroach-certs \ - --host=cockroachdb-public - ~~~ - - ~~~ - # Welcome to the CockroachDB SQL shell. - # All statements must be terminated by a semicolon. - # To exit, type: \q. - # - # Server version: CockroachDB CCL v21.1.0 (x86_64-unknown-linux-gnu, built 2021/04/23 13:54:57, go1.13.14) (same version as client) - # Cluster ID: a96791d9-998c-4683-a3d3-edbf425bbf11 - # - # Enter \? for a brief introduction. - # - root@cockroachdb-public:26257/defaultdb> - ~~~ - -{% include {{ page.version.version }}/orchestration/kubernetes-basic-sql.md %} -
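If you no longer need the client pod, you can remove it by name. A minimal cleanup sketch, assuming the `cockroachdb-client-secure` pod name used in the commands above:

{% include_cached copy-clipboard.html %}
~~~ shell
# Delete the secure client pod created from client-secure-operator.yaml.
$ kubectl delete pod cockroachdb-client-secure
~~~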
- -
- -{% include_cached copy-clipboard.html %} -~~~ shell -$ kubectl create \ --f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/bring-your-own-certs/client.yaml -~~~ - -~~~ -pod/cockroachdb-client-secure created -~~~ - -1. Get a shell into the pod and start the CockroachDB [built-in SQL client](cockroach-sql.html): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach sql \ - --certs-dir=/cockroach-certs \ - --host=cockroachdb-public - ~~~ - - ~~~ - # Welcome to the cockroach SQL interface. - # All statements must be terminated by a semicolon. - # To exit: CTRL + D. - # - # Client version: CockroachDB CCL v19.1.0 (x86_64-unknown-linux-gnu, built 2019/04/29 18:36:40, go1.11.6) - # Server version: CockroachDB CCL v19.1.0 (x86_64-unknown-linux-gnu, built 2019/04/29 18:36:40, go1.11.6) - - # Cluster ID: 256a8705-e348-4e3a-ab12-e1aba96857e4 - # - # Enter \? for a brief introduction. - # - root@cockroachdb-public:26257/defaultdb> - ~~~ - - {{site.data.alerts.callout_success}} - This pod will continue running indefinitely, so any time you need to reopen the built-in SQL client or run any other [`cockroach` client commands](cockroach-commands.html) (e.g., `cockroach node`), repeat step 2 using the appropriate `cockroach` command. - - If you'd prefer to delete the pod and recreate it when needed, run `kubectl delete pod cockroachdb-client-secure`. - {{site.data.alerts.end}} - -{% include {{ page.version.version }}/orchestration/kubernetes-basic-sql.md %} -
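If the SQL shell fails to start with a certificate error, one quick sanity check is to list the directory that `--certs-dir` points to inside the pod. A sketch, using the same pod and path as the command above:

{% include_cached copy-clipboard.html %}
~~~ shell
# List the client certificates mounted into the pod at the --certs-dir path.
$ kubectl exec cockroachdb-client-secure -- ls /cockroach-certs
~~~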
- -
-From your local workstation, use our [`client-secure.yaml`](https://github.com/cockroachdb/helm-charts/blob/master/examples/client-secure.yaml) file to launch a pod and keep it running indefinitely. - -1. Download the file: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl -OOOOOOOOO \ - https://raw.githubusercontent.com/cockroachdb/helm-charts/master/examples/client-secure.yaml - ~~~ - -1. In the file, set the following values: - - `spec.serviceAccountName: my-release-cockroachdb` - - `spec.image: cockroachdb/cockroach: {your CockroachDB version}` - - `spec.volumes[0].project.sources[0].secret.name: my-release-cockroachdb-client-secret` - -1. Use the file to launch a pod and keep it running indefinitely: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl create -f client-secure.yaml - ~~~ - - ~~~ - pod "cockroachdb-client-secure" created - ~~~ - -1. Get a shell into the pod and start the CockroachDB [built-in SQL client](cockroach-sql.html): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach sql \ - --certs-dir=./cockroach-certs \ - --host=my-release-cockroachdb-public - ~~~ - - ~~~ - # Welcome to the cockroach SQL interface. - # All statements must be terminated by a semicolon. - # To exit: CTRL + D. - # - # Client version: CockroachDB CCL v19.1.0 (x86_64-unknown-linux-gnu, built 2019/04/29 18:36:40, go1.11.6) - # Server version: CockroachDB CCL v19.1.0 (x86_64-unknown-linux-gnu, built 2019/04/29 18:36:40, go1.11.6) - - # Cluster ID: 256a8705-e348-4e3a-ab12-e1aba96857e4 - # - # Enter \? for a brief introduction. - # - root@my-release-cockroachdb-public:26257/defaultdb> - ~~~ - - {{site.data.alerts.callout_success}} - This pod will continue running indefinitely, so any time you need to reopen the built-in SQL client or run any other [`cockroach` client commands](cockroach-commands.html) (e.g., `cockroach node`), repeat step 2 using the appropriate `cockroach` command. - - If you'd prefer to delete the pod and recreate it when needed, run `kubectl delete pod cockroachdb-client-secure`. - {{site.data.alerts.end}} - -{% include {{ page.version.version }}/orchestration/kubernetes-basic-sql.md %} -
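If you are unsure which version to use for `spec.image`, one way to check is to print the image tag that the cluster pods are already running. A sketch, assuming the `my-release` release name used in this tutorial:

{% include_cached copy-clipboard.html %}
~~~ shell
# Print the container image (including its tag) used by the first CockroachDB pod.
$ kubectl get pod my-release-cockroachdb-0 \
-o jsonpath="{.spec.containers[0].image}"
~~~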
\ No newline at end of file diff --git a/src/current/_includes/v22.1/performance/alter-primary-key-hash-sharded.md b/src/current/_includes/v22.1/performance/alter-primary-key-hash-sharded.md deleted file mode 100644 index 7aac175286e..00000000000 --- a/src/current/_includes/v22.1/performance/alter-primary-key-hash-sharded.md +++ /dev/null @@ -1,66 +0,0 @@ -Let's assume the `events` table already exists: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE events ( - product_id INT8, - owner UUID, - serial_number VARCHAR, - event_id UUID, - ts TIMESTAMP, - data JSONB, - PRIMARY KEY (product_id, owner, serial_number, ts, event_id), - INDEX (ts) USING HASH -); -~~~ - -You can change an existing primary key to use hash sharding by adding the `USING HASH` clause at the end of the key definition: - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE events ALTER PRIMARY KEY USING COLUMNS (product_id, owner, serial_number, ts, event_id) USING HASH; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW INDEX FROM events; -~~~ - -~~~ - table_name | index_name | non_unique | seq_in_index | column_name | direction | storing | implicit --------------+---------------+------------+--------------+-------------------------------------------------------------------+-----------+---------+----------- - events | events_pkey | false | 1 | crdb_internal_event_id_owner_product_id_serial_number_ts_shard_16 | ASC | false | true - events | events_pkey | false | 2 | product_id | ASC | false | false - events | events_pkey | false | 3 | owner | ASC | false | false - events | events_pkey | false | 4 | serial_number | ASC | false | false - events | events_pkey | false | 5 | ts | ASC | false | false - events | events_pkey | false | 6 | event_id | ASC | false | false - events | events_pkey | false | 7 | data | N/A | true | false - events | events_ts_idx | true | 1 | crdb_internal_ts_shard_16 | ASC | false | true - events | events_ts_idx | true | 2 | ts | ASC | false | false - events | events_ts_idx | true | 3 | crdb_internal_event_id_owner_product_id_serial_number_ts_shard_16 | ASC | false | true - events | events_ts_idx | true | 4 | product_id | ASC | false | true - events | events_ts_idx | true | 5 | owner | ASC | false | true - events | events_ts_idx | true | 6 | serial_number | ASC | false | true - events | events_ts_idx | true | 7 | event_id | ASC | false | true -(14 rows) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM events; -~~~ - -~~~ - column_name | data_type | is_nullable | column_default | generation_expression | indices | is_hidden ---------------------------------------------------------------------+-----------+-------------+----------------+-----------------------------------------------------------------------------------------------+-----------------------------+------------ - product_id | INT8 | false | NULL | | {events_pkey,events_ts_idx} | false - owner | UUID | false | NULL | | {events_pkey,events_ts_idx} | false - serial_number | VARCHAR | false | NULL | | {events_pkey,events_ts_idx} | false - event_id | UUID | false | NULL | | {events_pkey,events_ts_idx} | false - ts | TIMESTAMP | false | NULL | | {events_pkey,events_ts_idx} | false - data | JSONB | true | NULL | | {events_pkey} | false - crdb_internal_ts_shard_16 | INT8 | false | NULL | mod(fnv32(crdb_internal.datums_to_bytes(ts)), 16) | {events_ts_idx} | true - crdb_internal_event_id_owner_product_id_serial_number_ts_shard_16 | INT8 | false | NULL | 
mod(fnv32(crdb_internal.datums_to_bytes(event_id, owner, product_id, serial_number, ts)), 16) | {events_pkey,events_ts_idx} | true -(8 rows) -~~~ diff --git a/src/current/_includes/v22.1/performance/check-rebalancing-after-partitioning.md b/src/current/_includes/v22.1/performance/check-rebalancing-after-partitioning.md deleted file mode 100644 index b26d29b8631..00000000000 --- a/src/current/_includes/v22.1/performance/check-rebalancing-after-partitioning.md +++ /dev/null @@ -1,41 +0,0 @@ -Over the next minutes, CockroachDB will rebalance all partitions based on the constraints you defined. - -To check this at a high level, access the Web UI on any node at `:8080` and look at the **Node List**. You'll see that the range count is still close to even across all nodes but much higher than before partitioning: - -Perf tuning rebalancing - -To check at a more granular level, SSH to one of the instances not running CockroachDB and run the `SHOW EXPERIMENTAL_RANGES` statement on the `vehicles` table: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach sql \ -{{page.certs}} \ ---host=
\ ---database=movr \ ---execute="SELECT * FROM \ -[SHOW EXPERIMENTAL_RANGES FROM TABLE vehicles] \ -WHERE \"start_key\" IS NOT NULL \ - AND \"start_key\" NOT LIKE '%Prefix%';" -~~~ - -~~~ - start_key | end_key | range_id | replicas | lease_holder -+------------------+----------------------------+----------+----------+--------------+ - /"boston" | /"boston"/PrefixEnd | 105 | {1,2,3} | 3 - /"los angeles" | /"los angeles"/PrefixEnd | 121 | {7,8,9} | 8 - /"new york" | /"new york"/PrefixEnd | 101 | {1,2,3} | 3 - /"san francisco" | /"san francisco"/PrefixEnd | 117 | {7,8,9} | 8 - /"seattle" | /"seattle"/PrefixEnd | 113 | {4,5,6} | 5 - /"washington dc" | /"washington dc"/PrefixEnd | 109 | {1,2,3} | 1 -(6 rows) -~~~ - -For reference, here's how the nodes map to zones: - -Node IDs | Zone ----------|----- -1-3 | `us-east1-b` (South Carolina) -4-6 | `us-west1-a` (Oregon) -7-9 | `us-west2-a` (Los Angeles) - -We can see that, after partitioning, the replicas for New York, Boston, and Washington DC are located on nodes 1-3 in `us-east1-b`, replicas for Seattle are located on nodes 4-6 in `us-west1-a`, and replicas for San Francisco and Los Angeles are located on nodes 7-9 in `us-west2-a`. diff --git a/src/current/_includes/v22.1/performance/check-rebalancing.md b/src/current/_includes/v22.1/performance/check-rebalancing.md deleted file mode 100644 index 3109150fdaf..00000000000 --- a/src/current/_includes/v22.1/performance/check-rebalancing.md +++ /dev/null @@ -1,33 +0,0 @@ -Since you started each node with the `--locality` flag set to its GCE zone, over the next minutes, CockroachDB will rebalance data evenly across the zones. - -To check this, access the DB Console on any node at `:8080` and look at the **Node List**. You'll see that the range count is more or less even across all nodes: - -Perf tuning rebalancing - -For reference, here's how the nodes map to zones: - -Node IDs | Zone ----------|----- -1-3 | `us-east1-b` (South Carolina) -4-6 | `us-west1-a` (Oregon) -7-9 | `us-west2-a` (Los Angeles) - -To verify even balancing at range level, SSH to one of the instances not running CockroachDB and run the `SHOW EXPERIMENTAL_RANGES` statement: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach sql \ -{{page.certs}} \ ---host=
\ ---database=movr \ ---execute="SHOW EXPERIMENTAL_RANGES FROM TABLE vehicles;" -~~~ - -~~~ - start_key | end_key | range_id | replicas | lease_holder -+-----------+---------+----------+----------+--------------+ - NULL | NULL | 33 | {3,4,7} | 7 -(1 row) -~~~ - -In this case, we can see that, for the single range containing `vehicles` data, one replica is in each zone, and the leaseholder is in the `us-west2-a` zone. diff --git a/src/current/_includes/v22.1/performance/configure-network.md b/src/current/_includes/v22.1/performance/configure-network.md deleted file mode 100644 index e9abeb94df3..00000000000 --- a/src/current/_includes/v22.1/performance/configure-network.md +++ /dev/null @@ -1,18 +0,0 @@ -CockroachDB requires TCP communication on two ports: - -- **26257** (`tcp:26257`) for inter-node communication (i.e., working as a cluster) -- **8080** (`tcp:8080`) for accessing the DB Console - -Since GCE instances communicate on their internal IP addresses by default, you do not need to take any action to enable inter-node communication. However, to access the DB Console from your local network, you must [create a firewall rule for your project](https://cloud.google.com/vpc/docs/using-firewalls): - -Field | Recommended Value -------|------------------ -Name | **cockroachweb** -Source filter | IP ranges -Source IP ranges | Your local network's IP ranges -Allowed protocols | **tcp:8080** -Target tags | `cockroachdb` - -{{site.data.alerts.callout_info}} -The **tag** feature will let you easily apply the rule to your instances. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/performance/contention-indicators.md b/src/current/_includes/v22.1/performance/contention-indicators.md deleted file mode 100644 index 41508c2310d..00000000000 --- a/src/current/_includes/v22.1/performance/contention-indicators.md +++ /dev/null @@ -1,4 +0,0 @@ -* Your application is experiencing degraded performance with transaction errors like `SQLSTATE: 40001`, `RETRY_WRITE_TOO_OLD`, and `RETRY_SERIALIZABLE`. See [Transaction Retry Error Reference](transaction-retry-error-reference.html). -* The [SQL Statement Contention graph](ui-sql-dashboard.html#sql-statement-contention) is showing spikes over time. -SQL Statement Contention graph in DB Console -* The [Transaction Restarts graph](ui-sql-dashboard.html) is showing spikes in retries over time. 
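For a quick SQL-level confirmation of these indicators, you can also query the `crdb_internal` contention views that the contention troubleshooting steps referenced below rely on. This is a minimal sketch; it assumes an open SQL shell on any node, and the exact output columns vary by version:

{% include_cached copy-clipboard.html %}
~~~ sql
-- Tables in the cluster that have recently experienced contention.
> SELECT * FROM crdb_internal.cluster_contended_tables;

-- Indexes in the cluster that have recently experienced contention.
> SELECT * FROM crdb_internal.cluster_contended_indexes;
~~~

If these views return rows for the tables your workload touches, continue with the contention troubleshooting steps referenced below.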
diff --git a/src/current/_includes/v22.1/performance/create-index-hash-sharded-secondary-index.md b/src/current/_includes/v22.1/performance/create-index-hash-sharded-secondary-index.md deleted file mode 100644 index 05f66896541..00000000000 --- a/src/current/_includes/v22.1/performance/create-index-hash-sharded-secondary-index.md +++ /dev/null @@ -1,62 +0,0 @@ -Let's assume the `events` table already exists: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE events ( - product_id INT8, - owner UUID, - serial_number VARCHAR, - event_id UUID, - ts TIMESTAMP, - data JSONB, - PRIMARY KEY (product_id, owner, serial_number, ts, event_id) -); -~~~ - -You can create a hash-sharded index on an existing table: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE INDEX ON events(ts) USING HASH; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW INDEX FROM events; -~~~ - -~~~ - table_name | index_name | non_unique | seq_in_index | column_name | direction | storing | implicit --------------+---------------+------------+--------------+---------------------------+-----------+---------+----------- - events | events_pkey | false | 1 | product_id | ASC | false | false - events | events_pkey | false | 2 | owner | ASC | false | false - events | events_pkey | false | 3 | serial_number | ASC | false | false - events | events_pkey | false | 4 | ts | ASC | false | false - events | events_pkey | false | 5 | event_id | ASC | false | false - events | events_pkey | false | 6 | data | N/A | true | false - events | events_ts_idx | true | 1 | crdb_internal_ts_shard_16 | ASC | false | true - events | events_ts_idx | true | 2 | ts | ASC | false | false - events | events_ts_idx | true | 3 | product_id | ASC | false | true - events | events_ts_idx | true | 4 | owner | ASC | false | true - events | events_ts_idx | true | 5 | serial_number | ASC | false | true - events | events_ts_idx | true | 6 | event_id | ASC | false | true -(12 rows) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM events; -~~~ - -~~~ - column_name | data_type | is_nullable | column_default | generation_expression | indices | is_hidden -----------------------------+-----------+-------------+----------------+---------------------------------------------------+-----------------------------+------------ - product_id | INT8 | false | NULL | | {events_pkey,events_ts_idx} | false - owner | UUID | false | NULL | | {events_pkey,events_ts_idx} | false - serial_number | VARCHAR | false | NULL | | {events_pkey,events_ts_idx} | false - event_id | UUID | false | NULL | | {events_pkey,events_ts_idx} | false - ts | TIMESTAMP | false | NULL | | {events_pkey,events_ts_idx} | false - data | JSONB | true | NULL | | {events_pkey} | false - crdb_internal_ts_shard_16 | INT8 | false | NULL | mod(fnv32(crdb_internal.datums_to_bytes(ts)), 16) | {events_ts_idx} | true -(7 rows) -~~~ diff --git a/src/current/_includes/v22.1/performance/create-table-hash-sharded-primary-index.md b/src/current/_includes/v22.1/performance/create-table-hash-sharded-primary-index.md deleted file mode 100644 index 40ba79a096a..00000000000 --- a/src/current/_includes/v22.1/performance/create-table-hash-sharded-primary-index.md +++ /dev/null @@ -1,37 +0,0 @@ -Let's create the `products` table and add a hash-sharded primary key on the `ts` column: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE products ( - ts DECIMAL PRIMARY KEY USING HASH, - product_id INT8 - ); -~~~ - -{% include_cached copy-clipboard.html %} 
-~~~ sql -> SHOW INDEX FROM products; -~~~ - -~~~ - table_name | index_name | non_unique | seq_in_index | column_name | direction | storing | implicit --------------+---------------+------------+--------------+---------------------------+-----------+---------+----------- - products | products_pkey | false | 1 | crdb_internal_ts_shard_16 | ASC | false | true - products | products_pkey | false | 2 | ts | ASC | false | false - products | products_pkey | false | 3 | product_id | N/A | true | false -(3 rows) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM products; -~~~ - -~~~ - column_name | data_type | is_nullable | column_default | generation_expression | indices | is_hidden -----------------------------+-----------+-------------+----------------+---------------------------------------------------+-----------------+------------ - crdb_internal_ts_shard_16 | INT8 | false | NULL | mod(fnv32(crdb_internal.datums_to_bytes(ts)), 16) | {products_pkey} | true - ts | DECIMAL | false | NULL | | {products_pkey} | false - product_id | INT8 | true | NULL | | {products_pkey} | false -(3 rows) -~~~ diff --git a/src/current/_includes/v22.1/performance/create-table-hash-sharded-secondary-index.md b/src/current/_includes/v22.1/performance/create-table-hash-sharded-secondary-index.md deleted file mode 100644 index dc0e164a0fb..00000000000 --- a/src/current/_includes/v22.1/performance/create-table-hash-sharded-secondary-index.md +++ /dev/null @@ -1,56 +0,0 @@ -Let's now create the `events` table and add a secondary index on the `ts` column in a single statement: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE events ( - product_id INT8, - owner UUID, - serial_number VARCHAR, - event_id UUID, - ts TIMESTAMP, - data JSONB, - PRIMARY KEY (product_id, owner, serial_number, ts, event_id), - INDEX (ts) USING HASH -); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW INDEX FROM events; -~~~ - -~~~ - table_name | index_name | non_unique | seq_in_index | column_name | direction | storing | implicit --------------+---------------+------------+--------------+---------------------------+-----------+---------+----------- - events | events_pkey | false | 1 | product_id | ASC | false | false - events | events_pkey | false | 2 | owner | ASC | false | false - events | events_pkey | false | 3 | serial_number | ASC | false | false - events | events_pkey | false | 4 | ts | ASC | false | false - events | events_pkey | false | 5 | event_id | ASC | false | false - events | events_pkey | false | 6 | data | N/A | true | false - events | events_ts_idx | true | 1 | crdb_internal_ts_shard_16 | ASC | false | true - events | events_ts_idx | true | 2 | ts | ASC | false | false - events | events_ts_idx | true | 3 | product_id | ASC | false | true - events | events_ts_idx | true | 4 | owner | ASC | false | true - events | events_ts_idx | true | 5 | serial_number | ASC | false | true - events | events_ts_idx | true | 6 | event_id | ASC | false | true -(12 rows) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM events; -~~~ - -~~~ - column_name | data_type | is_nullable | column_default | generation_expression | indices | is_hidden -----------------------------+-----------+-------------+----------------+---------------------------------------------------+-----------------------------+------------ - product_id | INT8 | false | NULL | | {events_pkey,events_ts_idx} | false - owner | UUID | false | NULL | | {events_pkey,events_ts_idx} | false - 
serial_number | VARCHAR | false | NULL | | {events_pkey,events_ts_idx} | false - event_id | UUID | false | NULL | | {events_pkey,events_ts_idx} | false - ts | TIMESTAMP | false | NULL | | {events_pkey,events_ts_idx} | false - data | JSONB | true | NULL | | {events_pkey} | false - crdb_internal_ts_shard_16 | INT8 | false | NULL | mod(fnv32(crdb_internal.datums_to_bytes(ts)), 16) | {events_ts_idx} | true -(7 rows) -~~~ diff --git a/src/current/_includes/v22.1/performance/import-movr.md b/src/current/_includes/v22.1/performance/import-movr.md deleted file mode 100644 index c61a32f64ce..00000000000 --- a/src/current/_includes/v22.1/performance/import-movr.md +++ /dev/null @@ -1,160 +0,0 @@ -Now you'll import Movr data representing users, vehicles, and rides in 3 eastern US cities (New York, Boston, and Washington DC) and 3 western US cities (Los Angeles, San Francisco, and Seattle). - -1. Still on the fourth instance, start the [built-in SQL shell](cockroach-sql.html), pointing it at one of the CockroachDB nodes: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql {{page.certs}} --host=
- ~~~ - -2. Create the `movr` database and set it as the default: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE DATABASE movr; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SET DATABASE = movr; - ~~~ - -3. Use the [`IMPORT`](import.html) statement to create and populate the `users`, `vehicles,` and `rides` tables: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > IMPORT TABLE users ( - id UUID NOT NULL, - city STRING NOT NULL, - name STRING NULL, - address STRING NULL, - credit_card STRING NULL, - CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC) - ) - CSV DATA ( - 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/users/n1.0.csv' - ); - ~~~ - - ~~~ - job_id | status | fraction_completed | rows | index_entries | system_records | bytes - +--------------------+-----------+--------------------+------+---------------+----------------+--------+ - 390345990764396545 | succeeded | 1 | 1998 | 0 | 0 | 241052 - (1 row) - - Time: 2.882582355s - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > IMPORT TABLE vehicles ( - id UUID NOT NULL, - city STRING NOT NULL, - type STRING NULL, - owner_id UUID NULL, - creation_time TIMESTAMP NULL, - status STRING NULL, - ext JSON NULL, - mycol STRING NULL, - CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC), - INDEX vehicles_auto_index_fk_city_ref_users (city ASC, owner_id ASC) - ) - CSV DATA ( - 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/vehicles/n1.0.csv' - ); - ~~~ - - ~~~ - job_id | status | fraction_completed | rows | index_entries | system_records | bytes - +--------------------+-----------+--------------------+-------+---------------+----------------+---------+ - 390346109887250433 | succeeded | 1 | 19998 | 19998 | 0 | 3558767 - (1 row) - - Time: 5.803841493s - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > IMPORT TABLE rides ( - id UUID NOT NULL, - city STRING NOT NULL, - vehicle_city STRING NULL, - rider_id UUID NULL, - vehicle_id UUID NULL, - start_address STRING NULL, - end_address STRING NULL, - start_time TIMESTAMP NULL, - end_time TIMESTAMP NULL, - revenue DECIMAL(10,2) NULL, - CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC), - INDEX rides_auto_index_fk_city_ref_users (city ASC, rider_id ASC), - INDEX rides_auto_index_fk_vehicle_city_ref_vehicles (vehicle_city ASC, vehicle_id ASC), - CONSTRAINT check_vehicle_city_city CHECK (vehicle_city = city) - ) - CSV DATA ( - 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.0.csv', - 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.1.csv', - 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.2.csv', - 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.3.csv', - 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.4.csv', - 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.5.csv', - 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.6.csv', - 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.7.csv', - 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.8.csv', - 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.9.csv' - ); - ~~~ - - ~~~ - job_id | status | fraction_completed | rows | index_entries | system_records | bytes - 
+--------------------+-----------+--------------------+--------+---------------+----------------+-----------+ - 390346325693792257 | succeeded | 1 | 999996 | 1999992 | 0 | 339741841 - (1 row) - - Time: 44.620371424s - ~~~ - - {{site.data.alerts.callout_success}} - You can observe the progress of imports as well as all schema change operations (e.g., adding secondary indexes) on the [**Jobs** page](ui-jobs-page.html) of the DB Console. - {{site.data.alerts.end}} - -7. Logically, there should be a number of [foreign key](foreign-key.html) relationships between the tables: - - Referencing columns | Referenced columns - --------------------|------------------- - `vehicles.city`, `vehicles.owner_id` | `users.city`, `users.id` - `rides.city`, `rides.rider_id` | `users.city`, `users.id` - `rides.vehicle_city`, `rides.vehicle_id` | `vehicles.city`, `vehicles.id` - - As mentioned earlier, it wasn't possible to put these relationships in place during `IMPORT`, but it was possible to create the required secondary indexes. Now, let's add the foreign key constraints: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > ALTER TABLE vehicles - ADD CONSTRAINT fk_city_ref_users - FOREIGN KEY (city, owner_id) - REFERENCES users (city, id); - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > ALTER TABLE rides - ADD CONSTRAINT fk_city_ref_users - FOREIGN KEY (city, rider_id) - REFERENCES users (city, id); - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > ALTER TABLE rides - ADD CONSTRAINT fk_vehicle_city_ref_vehicles - FOREIGN KEY (vehicle_city, vehicle_id) - REFERENCES vehicles (city, id); - ~~~ - -4. Exit the built-in SQL shell: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > \q - ~~~ diff --git a/src/current/_includes/v22.1/performance/lease-preference-system-database.md b/src/current/_includes/v22.1/performance/lease-preference-system-database.md deleted file mode 100644 index 23c4376fbf0..00000000000 --- a/src/current/_includes/v22.1/performance/lease-preference-system-database.md +++ /dev/null @@ -1,10 +0,0 @@ -To reduce latency while making {% if page.name == "online-schema-changes.md" %}online schema changes{% else %}[online schema changes](online-schema-changes.html){% endif %}, we recommend specifying a `lease_preference` [zone configuration](configure-replication-zones.html) on the `system` database to a single region and running all subsequent schema changes from a node within that region. For example, if the majority of online schema changes come from machines that are geographically close to `us-east1`, run the following: - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER DATABASE system CONFIGURE ZONE USING constraints = '{"+region=us-east1": 1}', lease_preferences = '[[+region=us-east1]]'; -~~~ - -Run all subsequent schema changes from a node in the specified region. - -If you do not intend to run more schema changes from that region, you can safely remove the lease preference from the zone configuration for the system database. 
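To double-check that the zone configuration took effect, and to remove the lease preference once you are done running schema changes from that region, statements along these lines should work. This is a hedged sketch: `SHOW ZONE CONFIGURATION` output varies by version, and setting `lease_preferences = '[]'` is assumed here to clear only the lease preference while leaving the replica constraint in place:

{% include_cached copy-clipboard.html %}
~~~ sql
-- Inspect the current zone configuration for the system database.
> SHOW ZONE CONFIGURATION FROM DATABASE system;

-- After the schema changes are complete, clear the lease preference.
-- The constraints set earlier are left untouched; drop them separately if no longer needed.
> ALTER DATABASE system CONFIGURE ZONE USING lease_preferences = '[]';
~~~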
diff --git a/src/current/_includes/v22.1/performance/overview.md b/src/current/_includes/v22.1/performance/overview.md deleted file mode 100644 index 195f8ee330f..00000000000 --- a/src/current/_includes/v22.1/performance/overview.md +++ /dev/null @@ -1,38 +0,0 @@ -### Topology - -You'll start with a 3-node CockroachDB cluster in a single Google Compute Engine (GCE) zone, with an extra instance for running a client application workload: - -Perf tuning topology - -{{site.data.alerts.callout_info}} -Within a single GCE zone, network latency between instances should be sub-millisecond. -{{site.data.alerts.end}} - -You'll then scale the cluster to 9 nodes running across 3 GCE regions, with an extra instance in each region for a client application workload: - -Perf tuning topology - -{{site.data.alerts.callout_info}} -Network latencies will increase with geographic distance between nodes. You can observe this in the [Network Latency page](ui-network-latency-page.html) of the DB Console. -{{site.data.alerts.end}} - -To reproduce the performance demonstrated in this tutorial: - -- For each CockroachDB node, you'll use the [`n2-standard-4`](https://cloud.google.com/compute/docs/machine-types#standard_machine_types) machine type (4 vCPUs, 16 GB memory) with the Ubuntu 16.04 OS image and a [local SSD](https://cloud.google.com/compute/docs/disks/#localssds) disk. -- For running the client application workload, you'll use smaller instances, such as `n2-standard-2`. - -### Schema - -Your schema and data will be based on our open-source, fictional peer-to-peer vehicle-sharing application, [MovR](movr.html). - -Perf tuning schema - -A few notes about the schema: - -- There are just three self-explanatory tables: In essence, `users` represents the people registered for the service, `vehicles` represents the pool of vehicles for the service, and `rides` represents when and where users have participated. -- Each table has a composite primary key, with `city` being first in the key. Although not necessary initially in the single-region deployment, once you scale the cluster to multiple regions, these compound primary keys will enable you to [geo-partition data at the row level](partitioning.html#partition-using-primary-key) by `city`. As such, this tutorial demonstrates a schema designed for future scaling. -- The [`IMPORT`](import.html) feature you'll use to import the data does not support foreign keys, so you'll import the data without [foreign key constraints](foreign-key.html). However, the import will create the secondary indexes required to add the foreign keys later. - -### Important concepts - -To understand the techniques in this tutorial, and to be able to apply them in your own scenarios, it's important to first understand [how reads and writes work in CockroachDB](architecture/reads-and-writes-overview.html). Review that document before getting started here. diff --git a/src/current/_includes/v22.1/performance/partition-by-city.md b/src/current/_includes/v22.1/performance/partition-by-city.md deleted file mode 100644 index 2634a204d33..00000000000 --- a/src/current/_includes/v22.1/performance/partition-by-city.md +++ /dev/null @@ -1,419 +0,0 @@ -For this service, the most effective technique for improving read and write latency is to [geo-partition](partitioning.html) the data by city. In essence, this means changing the way data is mapped to ranges. 
Instead of an entire table and its indexes mapping to a specific range or set of ranges, all rows in the table and its indexes with a given city will map to a range or set of ranges. Once ranges are defined in this way, we can then use the [replication zone](configure-replication-zones.html) feature to pin partitions to specific locations, ensuring that read and write requests from users in a specific city do not have to leave that region. - -1. Partitioning is an enterprise feature, so start off by [registering for a 30-day trial license](https://www.cockroachlabs.com/get-cockroachdb/enterprise/). - -2. Once you've received the trial license, SSH to any node in your cluster and [apply the license](licensing-faqs.html#set-a-license): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql \ - {{page.certs}} \ - --host=
\ - --execute="SET CLUSTER SETTING cluster.organization = '';" - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql \ - {{page.certs}} \ - --host=
\ - --execute="SET CLUSTER SETTING enterprise.license = '';" - ~~~ - -3. Define partitions for all tables and their secondary indexes. - - Start with the `users` table: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql \ - {{page.certs}} \ - --database=movr \ - --host=
\ - --execute="ALTER TABLE users \ - PARTITION BY LIST (city) ( \ - PARTITION new_york VALUES IN ('new york'), \ - PARTITION boston VALUES IN ('boston'), \ - PARTITION washington_dc VALUES IN ('washington dc'), \ - PARTITION seattle VALUES IN ('seattle'), \ - PARTITION san_francisco VALUES IN ('san francisco'), \ - PARTITION los_angeles VALUES IN ('los angeles') \ - );" - ~~~ - - Now define partitions for the `vehicles` table and its secondary indexes: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql \ - {{page.certs}} \ - --database=movr \ - --host=
\ - --execute="ALTER TABLE vehicles \ - PARTITION BY LIST (city) ( \ - PARTITION new_york VALUES IN ('new york'), \ - PARTITION boston VALUES IN ('boston'), \ - PARTITION washington_dc VALUES IN ('washington dc'), \ - PARTITION seattle VALUES IN ('seattle'), \ - PARTITION san_francisco VALUES IN ('san francisco'), \ - PARTITION los_angeles VALUES IN ('los angeles') \ - );" - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql \ - {{page.certs}} \ - --database=movr \ - --host=
\ - --execute="ALTER INDEX vehicles_auto_index_fk_city_ref_users \ - PARTITION BY LIST (city) ( \ - PARTITION new_york VALUES IN ('new york'), \ - PARTITION boston VALUES IN ('boston'), \ - PARTITION washington_dc VALUES IN ('washington dc'), \ - PARTITION seattle VALUES IN ('seattle'), \ - PARTITION san_francisco VALUES IN ('san francisco'), \ - PARTITION los_angeles VALUES IN ('los angeles') \ - );" - ~~~ - - Next, define partitions for the `rides` table and its secondary indexes: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql \ - {{page.certs}} \ - --database=movr \ - --host=
\ - --execute="ALTER TABLE rides \ - PARTITION BY LIST (city) ( \ - PARTITION new_york VALUES IN ('new york'), \ - PARTITION boston VALUES IN ('boston'), \ - PARTITION washington_dc VALUES IN ('washington dc'), \ - PARTITION seattle VALUES IN ('seattle'), \ - PARTITION san_francisco VALUES IN ('san francisco'), \ - PARTITION los_angeles VALUES IN ('los angeles') \ - );" - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql \ - {{page.certs}} \ - --database=movr \ - --host=
\ - --execute="ALTER INDEX rides_auto_index_fk_city_ref_users \ - PARTITION BY LIST (city) ( \ - PARTITION new_york VALUES IN ('new york'), \ - PARTITION boston VALUES IN ('boston'), \ - PARTITION washington_dc VALUES IN ('washington dc'), \ - PARTITION seattle VALUES IN ('seattle'), \ - PARTITION san_francisco VALUES IN ('san francisco'), \ - PARTITION los_angeles VALUES IN ('los angeles') \ - );" - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql \ - {{page.certs}} \ - --database=movr \ - --host=
\ - --execute="ALTER INDEX rides_auto_index_fk_vehicle_city_ref_vehicles \ - PARTITION BY LIST (vehicle_city) ( \ - PARTITION new_york VALUES IN ('new york'), \ - PARTITION boston VALUES IN ('boston'), \ - PARTITION washington_dc VALUES IN ('washington dc'), \ - PARTITION seattle VALUES IN ('seattle'), \ - PARTITION san_francisco VALUES IN ('san francisco'), \ - PARTITION los_angeles VALUES IN ('los angeles') \ - );" - ~~~ - - Finally, drop an unused index on `rides` rather than partition it: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql \ - {{page.certs}} \ - --database=movr \ - --host=
\ -   --execute="DROP INDEX rides_start_time_idx;" -   ~~~ -    -   {{site.data.alerts.callout_info}} -   The `rides` table contains 1 million rows, so dropping this index will take a few minutes. -   {{site.data.alerts.end}} - -7. Now [create replication zones](configure-replication-zones.html#create-a-replication-zone-for-a-partition) to require city data to be stored on specific nodes based on node locality. - -    City | Locality -    -----|--------- -    New York | `zone=us-east1-b` -    Boston | `zone=us-east1-b` -    Washington DC | `zone=us-east1-b` -    Seattle | `zone=us-west1-a` -    San Francisco | `zone=us-west2-a` -    Los Angeles | `zone=us-west2-a` - -    {{site.data.alerts.callout_info}} -    Since our nodes are located in 3 specific GCE zones, we're only going to use the `zone=` portion of node locality. If we were using multiple zones per region, we would likely use the `region=` portion of the node locality instead. -    {{site.data.alerts.end}} - -    Start with the `users` table partitions: - -    {% include_cached copy-clipboard.html %} -    ~~~ shell -    $ cockroach sql --execute="ALTER PARTITION new_york OF TABLE movr.users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ -    {{page.certs}} \ -    --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION boston OF TABLE movr.users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION washington_dc OF TABLE movr.users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION seattle OF TABLE movr.users CONFIGURE ZONE USING constraints='[+zone=us-west1-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION san_francisco OF TABLE movr.users CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION los_angeles OF TABLE movr.users CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - Move on to the `vehicles` table and secondary index partitions: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION new_york OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION new_york OF INDEX vehicles_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION boston OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION boston OF INDEX vehicles_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION washington_dc OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION washington_dc OF INDEX vehicles_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION seattle OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-west1-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION seattle OF INDEX vehicles_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-west1-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION san_francisco OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION san_francisco OF INDEX vehicles_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION los_angeles OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION los_angeles OF INDEX vehicles_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - Finish with the `rides` table and secondary index partitions: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION new_york OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION new_york OF INDEX rides_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION new_york OF INDEX rides_auto_index_fk_vehicle_city_ref_vehicles CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION boston OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION boston OF INDEX rides_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION boston OF INDEX rides_auto_index_fk_vehicle_city_ref_vehicles CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION washington_dc OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION washington_dc OF INDEX rides_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION washington_dc OF INDEX rides_auto_index_fk_vehicle_city_ref_vehicles CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION seattle OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-west1-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION seattle OF INDEX rides_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-west1-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION seattle OF INDEX rides_auto_index_fk_vehicle_city_ref_vehicles CONFIGURE ZONE USING constraints='[+zone=us-west1-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION san_francisco OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION san_francisco OF INDEX rides_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION san_francisco OF INDEX rides_auto_index_fk_vehicle_city_ref_vehicles CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION los_angeles OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION los_angeles OF INDEX rides_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION los_angeles OF INDEX rides_auto_index_fk_vehicle_city_ref_vehicles CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \ - {{page.certs}} \ - --host=
- ~~~ diff --git a/src/current/_includes/v22.1/performance/scale-cluster.md b/src/current/_includes/v22.1/performance/scale-cluster.md deleted file mode 100644 index 6c368d663de..00000000000 --- a/src/current/_includes/v22.1/performance/scale-cluster.md +++ /dev/null @@ -1,61 +0,0 @@ -1. SSH to one of the `n2-standard-4` instances in the `us-west1-a` zone. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, extract the binary, and copy it into the `PATH`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ sudo cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - -3. Run the [`cockroach start`](cockroach-start.html) command: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - {{page.certs}} \ - --advertise-host= \ - --join= \ - --locality=cloud=gce,region=us-west1,zone=us-west1-a \ - --cache=.25 \ - --max-sql-memory=.25 \ - --background - ~~~ - -4. Repeat steps 1 - 3 for the other two `n2-standard-4` instances in the `us-west1-a` zone. - -5. SSH to one of the `n2-standard-4` instances in the `us-west2-a` zone. - -6. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, extract the binary, and copy it into the `PATH`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ sudo cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - -7. Run the [`cockroach start`](cockroach-start.html) command: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - {{page.certs}} \ - --advertise-host= \ - --join= \ - --locality=cloud=gce,region=us-west2,zone=us-west2-a \ - --cache=.25 \ - --max-sql-memory=.25 \ - --background - ~~~ - -8. Repeat steps 5 - 7 for the other two `n2-standard-4` instances in the `us-west2-a` zone. diff --git a/src/current/_includes/v22.1/performance/start-cluster.md b/src/current/_includes/v22.1/performance/start-cluster.md deleted file mode 100644 index ee1d71149a7..00000000000 --- a/src/current/_includes/v22.1/performance/start-cluster.md +++ /dev/null @@ -1,60 +0,0 @@ -#### Start the nodes - -1. SSH to the first `n2-standard-4` instance. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, extract the binary, and copy it into the `PATH`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ sudo cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - -3. Run the [`cockroach start`](cockroach-start.html) command: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - {{page.certs}} \ - --advertise-host= \ - --join=:26257,:26257,:26257 \ - --locality=cloud=gce,region=us-east1,zone=us-east1-b \ - --cache=.25 \ - --max-sql-memory=.25 \ - --background - ~~~ - -4. 
Repeat steps 1 - 3 for the other two `n2-standard-4` instances. Be sure to adjust the `--advertise-host` flag each time. - -#### Initialize the cluster - -1. SSH to the fourth instance, the one not running a CockroachDB node. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - -    {% include_cached copy-clipboard.html %} -    ~~~ shell -    $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ -    | tar -xz -    ~~~ - -3. Copy the binary into the `PATH`: - -    {% include_cached copy-clipboard.html %} -    ~~~ shell -    $ sudo cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ -    ~~~ - -4. Run the [`cockroach init`](cockroach-init.html) command: - -    {% include_cached copy-clipboard.html %} -    ~~~ shell -    $ cockroach init {{page.certs}} --host=
-    ~~~ - -    Each node then prints helpful details to the [standard output](cockroach-start.html#standard-output), such as the CockroachDB version, the URL for the DB Console, and the SQL URL for clients. diff --git a/src/current/_includes/v22.1/performance/statement-contention.md b/src/current/_includes/v22.1/performance/statement-contention.md deleted file mode 100644 index 059d05ea2c4..00000000000 --- a/src/current/_includes/v22.1/performance/statement-contention.md +++ /dev/null @@ -1,14 +0,0 @@ -Find the transactions, and the statements within those transactions, that are experiencing [contention]({{ link_prefix }}performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention). CockroachDB has several tools to help you track down such transactions and statements: - -* In DB Console, visit the [Transactions](ui-transactions-page.html) and [Statements](ui-statements-page.html) pages and sort transactions and statements by contention. -* Query the following tables: - -  - [`crdb_internal.cluster_contended_indexes`](crdb-internal.html#cluster_contended_indexes) and [`crdb_internal.cluster_contended_tables`](crdb-internal.html#cluster_contended_tables) tables for your database to find the indexes and tables that are experiencing contention. -  - [`crdb_internal.cluster_locks`](crdb-internal.html#cluster_locks) to find out which transactions are holding locks on which objects. -  - [`crdb_internal.cluster_contention_events`](crdb-internal.html#view-the-tables-indexes-with-the-most-time-under-contention) to view the tables/indexes with the most time under contention. - -After you identify the transactions or statements that are causing contention, follow the steps in the next section [to avoid contention](performance-best-practices-overview.html#avoid-transaction-contention). - -{{site.data.alerts.callout_info}} -If you experience a hanging or stuck query that is not showing up in the list of contended transactions and statements on the [Transactions](ui-transactions-page.html) or [Statements](ui-statements-page.html) pages in the DB Console, the process described above will not work. You will need to follow the process described in [Hanging or stuck queries](query-behavior-troubleshooting.html#hanging-or-stuck-queries) instead. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/performance/test-performance-after-partitioning.md b/src/current/_includes/v22.1/performance/test-performance-after-partitioning.md deleted file mode 100644 index 9754f6d9cd1..00000000000 --- a/src/current/_includes/v22.1/performance/test-performance-after-partitioning.md +++ /dev/null @@ -1,93 +0,0 @@ -After partitioning, reads and writes for a specific city will be much faster because all replicas for that city are now located on the nodes closest to the city. - -To check this, let's repeat a few of the read and write queries that we executed before partitioning in [step 12](#step-12-test-performance). - -#### Reads - -Again imagine we are a Movr administrator in New York, and we want to get the IDs and descriptions of all New York-based bikes that are currently in use: - -1. SSH to the instance in `us-east1-b` with the Python client. - -2. Query for the data: - -    {% include_cached copy-clipboard.html %} -    ~~~ shell -    $ {{page.app}} \ -    --host=
\ - --statement="SELECT id, ext FROM vehicles \ - WHERE city = 'new york' \ - AND type = 'bike' \ - AND status = 'in_use'" \ - --repeat=50 \ - --times - ~~~ - - ~~~ - Result: - ['id', 'ext'] - ['0068ee24-2dfb-437d-9a5d-22bb742d519e', "{u'color': u'green', u'brand': u'Kona'}"] - ['01b80764-283b-4232-8961-a8d6a4121a08', "{u'color': u'green', u'brand': u'Pinarello'}"] - ['02a39628-a911-4450-b8c0-237865546f7f', "{u'color': u'black', u'brand': u'Schwinn'}"] - ['02eb2a12-f465-4575-85f8-a4b77be14c54', "{u'color': u'black', u'brand': u'Pinarello'}"] - ['02f2fcc3-fea6-4849-a3a0-dc60480fa6c2', "{u'color': u'red', u'brand': u'FujiCervelo'}"] - ['034d42cf-741f-428c-bbbb-e31820c68588', "{u'color': u'yellow', u'brand': u'Santa Cruz'}"] - ... - - Times (milliseconds): - [20.065784454345703, 7.866144180297852, 8.362054824829102, 9.08803939819336, 7.925987243652344, 7.543087005615234, 7.786035537719727, 8.227825164794922, 7.907867431640625, 7.654905319213867, 7.793903350830078, 7.627964019775391, 7.833957672119141, 7.858037948608398, 7.474184036254883, 9.459972381591797, 7.726192474365234, 7.194995880126953, 7.364034652709961, 7.25102424621582, 7.650852203369141, 7.663965225219727, 9.334087371826172, 7.810115814208984, 7.543087005615234, 7.134914398193359, 7.922887802124023, 7.220029830932617, 7.606029510498047, 7.208108901977539, 7.333993911743164, 7.464170455932617, 7.679939270019531, 7.436990737915039, 7.62486457824707, 7.235050201416016, 7.420063018798828, 7.795095443725586, 7.39598274230957, 7.546901702880859, 7.582187652587891, 7.9669952392578125, 7.418155670166016, 7.539033889770508, 7.805109024047852, 7.086992263793945, 7.069826126098633, 7.833957672119141, 7.43412971496582, 7.035017013549805] - - Median time (milliseconds): - 7.62641429901 - ~~~ - -Before partitioning, this query took a median time of 72.02ms. After partitioning, the query took a median time of only 7.62ms. - -#### Writes - -Now let's again imagine 100 people in New York and 100 people in Seattle and 100 people in New York want to create new Movr accounts: - -1. SSH to the instance in `us-west1-a` with the Python client. - -2. Create 100 Seattle-based users: - - {% include_cached copy-clipboard.html %} - ~~~ shell - {{page.app}} \ - --host=
\ - --statement="INSERT INTO users VALUES (gen_random_uuid(), 'seattle', 'Seatller', '111 East Street', '1736352379937347')" \ - --repeat=100 \ - --times - ~~~ - - ~~~ - Times (milliseconds): - [41.8248176574707, 9.701967239379883, 8.725166320800781, 9.058952331542969, 7.819175720214844, 6.247997283935547, 10.265827178955078, 7.627964019775391, 9.120941162109375, 7.977008819580078, 9.247064590454102, 8.929967880249023, 9.610176086425781, 14.40286636352539, 8.588075637817383, 8.67319107055664, 9.417057037353516, 7.652044296264648, 8.917093276977539, 9.135961532592773, 8.604049682617188, 9.220123291015625, 7.578134536743164, 9.096860885620117, 8.942842483520508, 8.63790512084961, 7.722139358520508, 13.59701156616211, 9.176015853881836, 11.484146118164062, 9.212017059326172, 7.563114166259766, 8.793115615844727, 8.80289077758789, 7.827043533325195, 7.6389312744140625, 17.47584342956543, 9.436845779418945, 7.63392448425293, 8.594989776611328, 9.002208709716797, 8.93402099609375, 8.71896743774414, 8.76307487487793, 8.156061172485352, 8.729934692382812, 8.738040924072266, 8.25190544128418, 8.971929550170898, 7.460832595825195, 8.889198303222656, 8.45789909362793, 8.761167526245117, 10.223865509033203, 8.892059326171875, 8.961915969848633, 8.968114852905273, 7.750988006591797, 7.761955261230469, 9.199142456054688, 9.02700424194336, 9.509086608886719, 9.428977966308594, 7.902860641479492, 8.940935134887695, 8.615970611572266, 8.75401496887207, 7.906913757324219, 8.179187774658203, 11.447906494140625, 8.71419906616211, 9.202003479003906, 9.263038635253906, 9.089946746826172, 8.92496109008789, 10.32114028930664, 7.913827896118164, 9.464025497436523, 10.612010955810547, 8.78596305847168, 8.878946304321289, 7.575035095214844, 10.657072067260742, 8.777856826782227, 8.649110794067383, 9.012937545776367, 8.931875228881836, 9.31406021118164, 9.396076202392578, 8.908987045288086, 8.002996444702148, 9.089946746826172, 7.5588226318359375, 8.918046951293945, 12.117862701416016, 7.266998291015625, 8.074045181274414, 8.955001831054688, 8.868932723999023, 8.755922317504883] - - Median time (milliseconds): - 8.90052318573 - ~~~ - - Before partitioning, this query took a median time of 48.40ms. After partitioning, the query took a median time of only 8.90ms. - -3. SSH to the instance in `us-east1-b` with the Python client. - -4. Create 100 new NY-based users: - - {% include_cached copy-clipboard.html %} - ~~~ shell - {{page.app}} \ - --host=
\ - --statement="INSERT INTO users VALUES (gen_random_uuid(), 'new york', 'New Yorker', '111 West Street', '9822222379937347')" \ - --repeat=100 \ - --times - ~~~ - - ~~~ - Times (milliseconds): - [276.3068675994873, 9.830951690673828, 8.772134780883789, 9.304046630859375, 8.24880599975586, 7.959842681884766, 7.848978042602539, 7.879018783569336, 7.754087448120117, 10.724067687988281, 13.960123062133789, 9.825944900512695, 9.60993766784668, 9.273052215576172, 9.41920280456543, 8.040904998779297, 16.484975814819336, 10.178089141845703, 8.322000503540039, 9.468793869018555, 8.002042770385742, 9.185075759887695, 9.54294204711914, 9.387016296386719, 9.676933288574219, 13.051986694335938, 9.506940841674805, 12.327909469604492, 10.377168655395508, 15.023946762084961, 9.985923767089844, 7.853031158447266, 9.43303108215332, 9.164094924926758, 10.941028594970703, 9.37199592590332, 12.359857559204102, 8.975028991699219, 7.728099822998047, 8.310079574584961, 9.792089462280273, 9.448051452636719, 8.057117462158203, 9.37795639038086, 9.753942489624023, 9.576082229614258, 8.192062377929688, 9.392023086547852, 7.97581672668457, 8.165121078491211, 9.660959243774414, 8.270978927612305, 9.901046752929688, 8.085966110229492, 10.581016540527344, 9.831905364990234, 7.883787155151367, 8.077859878540039, 8.161067962646484, 10.02812385559082, 7.9898834228515625, 9.840965270996094, 9.452104568481445, 9.747028350830078, 9.003162384033203, 9.206056594848633, 9.274005889892578, 7.8449249267578125, 8.827924728393555, 9.322881698608398, 12.08186149597168, 8.76307487487793, 8.353948593139648, 8.182048797607422, 7.736921310424805, 9.31406021118164, 9.263992309570312, 9.282112121582031, 7.823944091796875, 9.11712646484375, 8.099079132080078, 9.156942367553711, 8.363962173461914, 10.974884033203125, 8.729934692382812, 9.2620849609375, 9.27591323852539, 8.272886276245117, 8.25190544128418, 8.093118667602539, 9.259939193725586, 8.413076400756836, 8.198976516723633, 9.95182991027832, 8.024930953979492, 8.895158767700195, 8.243083953857422, 9.076833724975586, 9.994029998779297, 10.149955749511719] - - Median time (milliseconds): - 9.26303863525 - ~~~ - - Before partitioning, this query took a median time of 116.86ms. After partitioning, the query took a median time of only 9.26ms. diff --git a/src/current/_includes/v22.1/performance/test-performance.md b/src/current/_includes/v22.1/performance/test-performance.md deleted file mode 100644 index 018dbd902ab..00000000000 --- a/src/current/_includes/v22.1/performance/test-performance.md +++ /dev/null @@ -1,146 +0,0 @@ -In general, all of the tuning techniques featured in the single-region scenario above still apply in a multi-region deployment. However, the fact that data and leaseholders are spread across the US means greater latencies in many cases. - -#### Reads - -For example, imagine we are a Movr administrator in New York, and we want to get the IDs and descriptions of all New York-based bikes that are currently in use: - -1. SSH to the instance in `us-east1-b` with the Python client. - -2. Query for the data: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ {{page.app}} \ - --host=
\ - --statement="SELECT id, ext FROM vehicles \ - WHERE city = 'new york' \ - AND type = 'bike' \ - AND status = 'in_use'" \ - --repeat=50 \ - --times - ~~~ - - ~~~ - Result: - ['id', 'ext'] - ['0068ee24-2dfb-437d-9a5d-22bb742d519e', "{u'color': u'green', u'brand': u'Kona'}"] - ['01b80764-283b-4232-8961-a8d6a4121a08', "{u'color': u'green', u'brand': u'Pinarello'}"] - ['02a39628-a911-4450-b8c0-237865546f7f', "{u'color': u'black', u'brand': u'Schwinn'}"] - ['02eb2a12-f465-4575-85f8-a4b77be14c54', "{u'color': u'black', u'brand': u'Pinarello'}"] - ['02f2fcc3-fea6-4849-a3a0-dc60480fa6c2', "{u'color': u'red', u'brand': u'FujiCervelo'}"] - ['034d42cf-741f-428c-bbbb-e31820c68588', "{u'color': u'yellow', u'brand': u'Santa Cruz'}"] - ... - - Times (milliseconds): - [933.8209629058838, 72.02410697937012, 72.45206832885742, 72.39294052124023, 72.8158950805664, 72.07584381103516, 72.21412658691406, 71.96712493896484, 71.75517082214355, 72.16811180114746, 71.78592681884766, 72.91603088378906, 71.91109657287598, 71.4719295501709, 72.40676879882812, 71.8080997467041, 71.84004783630371, 71.98500633239746, 72.40891456604004, 73.75001907348633, 71.45905494689941, 71.53081893920898, 71.46596908569336, 72.07608222961426, 71.94995880126953, 71.41804695129395, 71.29096984863281, 72.11899757385254, 71.63381576538086, 71.3050365447998, 71.83194160461426, 71.20394706726074, 70.9981918334961, 72.79205322265625, 72.63493537902832, 72.15285301208496, 71.8698501586914, 72.30591773986816, 71.53582572937012, 72.69001007080078, 72.03006744384766, 72.56317138671875, 71.61688804626465, 72.17121124267578, 70.20092010498047, 72.12018966674805, 73.34589958190918, 73.01592826843262, 71.49410247802734, 72.19099998474121] - - Median time (milliseconds): - 72.0270872116 - ~~~ - -As we saw earlier, the leaseholder for the `vehicles` table is in `us-west2-a` (Los Angeles), so our query had to go from the gateway node in `us-east1-b` all the way to the west coast and then back again before returning data to the client. - -For contrast, imagine we are now a Movr administrator in Los Angeles, and we want to get the IDs and descriptions of all Los Angeles-based bikes that are currently in use: - -1. SSH to the instance in `us-west2-a` with the Python client. - -2. Query for the data: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ {{page.app}} \ - --host=
\ - --statement="SELECT id, ext FROM vehicles \ - WHERE city = 'los angeles' \ - AND type = 'bike' \ - AND status = 'in_use'" \ - --repeat=50 \ - --times - ~~~ - - ~~~ - Result: - ['id', 'ext'] - ['00078349-94d4-43e6-92be-8b0d1ac7ee9f', "{u'color': u'blue', u'brand': u'Merida'}"] - ['003f84c4-fa14-47b2-92d4-35a3dddd2d75', "{u'color': u'red', u'brand': u'Kona'}"] - ['0107a133-7762-4392-b1d9-496eb30ee5f9', "{u'color': u'yellow', u'brand': u'Kona'}"] - ['0144498b-4c4f-4036-8465-93a6bea502a3', "{u'color': u'blue', u'brand': u'Pinarello'}"] - ['01476004-fb10-4201-9e56-aadeb427f98a', "{u'color': u'black', u'brand': u'Merida'}"] - - Times (milliseconds): - [782.6759815216064, 8.564949035644531, 8.226156234741211, 7.949113845825195, 7.86590576171875, 7.842063903808594, 7.674932479858398, 7.555961608886719, 7.642984390258789, 8.024930953979492, 7.717132568359375, 8.46409797668457, 7.520914077758789, 7.6541900634765625, 7.458925247192383, 7.671833038330078, 7.740020751953125, 7.771015167236328, 7.598161697387695, 8.411169052124023, 7.408857345581055, 7.469892501831055, 7.524967193603516, 7.764101028442383, 7.750988006591797, 7.2460174560546875, 6.927967071533203, 7.822990417480469, 7.27391242980957, 7.730960845947266, 7.4710845947265625, 7.4310302734375, 7.33494758605957, 7.455110549926758, 7.021188735961914, 7.083892822265625, 7.812976837158203, 7.625102996826172, 7.447957992553711, 7.179021835327148, 7.504940032958984, 7.224082946777344, 7.257938385009766, 7.714986801147461, 7.4939727783203125, 7.6160430908203125, 7.578849792480469, 7.890939712524414, 7.546901702880859, 7.411956787109375] - - Median time (milliseconds): - 7.6071023941 - ~~~ - -Because the leaseholder for `vehicles` is in the same zone as the client request, this query took just 7.60ms compared to the similar query in New York that took 72.02ms. - -#### Writes - -The geographic distribution of data impacts write performance as well. For example, imagine 100 people in Seattle and 100 people in New York want to create new Movr accounts: - -1. SSH to the instance in `us-west1-a` with the Python client. - -2. Create 100 Seattle-based users: - - {% include_cached copy-clipboard.html %} - ~~~ shell - {{page.app}} \ - --host=
\ - --statement="INSERT INTO users VALUES (gen_random_uuid(), 'seattle', 'Seatller', '111 East Street', '1736352379937347')" \ - --repeat=100 \ - --times - ~~~ - - ~~~ - Times (milliseconds): - [277.4538993835449, 50.12702941894531, 47.75214195251465, 48.13408851623535, 47.872066497802734, 48.65407943725586, 47.78695106506348, 49.14689064025879, 52.770137786865234, 49.00097846984863, 48.68602752685547, 47.387123107910156, 47.36208915710449, 47.6841926574707, 46.49209976196289, 47.06096649169922, 46.753883361816406, 46.304941177368164, 48.90894889831543, 48.63715171813965, 48.37393760681152, 49.23295974731445, 50.13418197631836, 48.310041427612305, 48.57516288757324, 47.62911796569824, 47.77693748474121, 47.505855560302734, 47.89996147155762, 49.79205131530762, 50.76479911804199, 50.21500587463379, 48.73299598693848, 47.55592346191406, 47.35088348388672, 46.7071533203125, 43.00808906555176, 43.1060791015625, 46.02813720703125, 47.91092872619629, 68.71294975280762, 49.241065979003906, 48.9039421081543, 47.82295227050781, 48.26998710632324, 47.631025314331055, 64.51892852783203, 48.12812805175781, 67.33417510986328, 48.603057861328125, 50.31013488769531, 51.02396011352539, 51.45716667175293, 50.85396766662598, 49.07512664794922, 47.49894142150879, 44.67201232910156, 43.827056884765625, 44.412851333618164, 46.69189453125, 49.55601692199707, 49.16882514953613, 49.88598823547363, 49.31306838989258, 46.875, 46.69594764709473, 48.31886291503906, 48.378944396972656, 49.0570068359375, 49.417972564697266, 48.22111129760742, 50.662994384765625, 50.58097839355469, 75.44088363647461, 51.05400085449219, 50.85110664367676, 48.187971115112305, 56.7781925201416, 42.47403144836426, 46.2191104888916, 53.96890640258789, 46.697139739990234, 48.99096488952637, 49.1330623626709, 46.34690284729004, 47.09315299987793, 46.39410972595215, 46.51689529418945, 47.58000373840332, 47.924041748046875, 48.426151275634766, 50.22597312927246, 50.1859188079834, 50.37498474121094, 49.861907958984375, 51.477909088134766, 73.09293746948242, 48.779964447021484, 45.13692855834961, 42.2968864440918] - - Median time (milliseconds): - 48.4025478363 - ~~~ - -3. SSH to the instance in `us-east1-b` with the Python client. - -4. Create 100 new NY-based users: - - {% include_cached copy-clipboard.html %} - ~~~ shell - {{page.app}} \ - --host=
\ - --statement="INSERT INTO users VALUES (gen_random_uuid(), 'new york', 'New Yorker', '111 West Street', '9822222379937347')" \ - --repeat=100 \ - --times - ~~~ - - ~~~ - Times (milliseconds): - [131.05082511901855, 116.88899993896484, 115.15498161315918, 117.095947265625, 121.04082107543945, 115.8750057220459, 113.80696296691895, 113.05880546569824, 118.41201782226562, 125.30899047851562, 117.5389289855957, 115.23890495300293, 116.84799194335938, 120.0411319732666, 115.62800407409668, 115.08989334106445, 113.37089538574219, 115.15498161315918, 115.96989631652832, 133.1961154937744, 114.25995826721191, 118.09396743774414, 122.24102020263672, 116.14608764648438, 114.80998992919922, 131.9139003753662, 114.54391479492188, 115.15307426452637, 116.7759895324707, 135.10799407958984, 117.18511581420898, 120.15485763549805, 118.0570125579834, 114.52388763427734, 115.28396606445312, 130.00011444091797, 126.45292282104492, 142.69423484802246, 117.60401725769043, 134.08493995666504, 117.47002601623535, 115.75007438659668, 117.98381805419922, 115.83089828491211, 114.88890647888184, 113.23404312133789, 121.1700439453125, 117.84791946411133, 115.35286903381348, 115.0820255279541, 116.99700355529785, 116.67394638061523, 116.1041259765625, 114.67289924621582, 112.98894882202148, 117.1119213104248, 119.78602409362793, 114.57300186157227, 129.58717346191406, 118.37983131408691, 126.68204307556152, 118.30306053161621, 113.27195167541504, 114.22920227050781, 115.80777168273926, 116.81294441223145, 114.76683616638184, 115.1430606842041, 117.29192733764648, 118.24417114257812, 116.56999588012695, 113.8620376586914, 114.88819122314453, 120.80597877502441, 132.39002227783203, 131.00910186767578, 114.56179618835449, 117.03896522521973, 117.72680282592773, 115.6010627746582, 115.27681350708008, 114.52317237854004, 114.87483978271484, 117.78903007507324, 116.65701866149902, 122.6949691772461, 117.65193939208984, 120.5449104309082, 115.61179161071777, 117.54202842712402, 114.70890045166016, 113.58809471130371, 129.7171115875244, 117.57993698120117, 117.1119213104248, 117.64001846313477, 140.66505432128906, 136.41691207885742, 116.24789237976074, 115.19908905029297] - - Median time (milliseconds): - 116.868495941 - ~~~ - -It took 48.40ms to create a user in Seattle and 116.86ms to create a user in New York. To better understand this discrepancy, let's look at the distribution of data for the `users` table: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach sql \ -{{page.certs}} \ ---host=
\ ---database=movr \ ---execute="SHOW EXPERIMENTAL_RANGES FROM TABLE users;" -~~~ - -~~~ - start_key | end_key | range_id | replicas | lease_holder -+-----------+---------+----------+----------+--------------+ - NULL | NULL | 49 | {2,6,8} | 6 -(1 row) -~~~ - -For the single range containing `users` data, one replica is in each zone, with the leaseholder in the `us-west1-a` zone. This means that: - -- When creating a user in Seattle, the request doesn't have to leave the zone to reach the leaseholder. However, since a write requires consensus from its replica group, the write has to wait for confirmation from either the replica in `us-west1-b` (Los Angeles) or `us-east1-b` (New York) before committing and then returning confirmation to the client. -- When creating a user in New York, there are more network hops and, thus, increased latency. The request first needs to travel across the continent to the leaseholder in `us-west1-a`. It then has to wait for confirmation from either the replica in `us-west1-b` (Los Angeles) or `us-east1-b` (New York) before committing and then returning confirmation to the client back in the east. diff --git a/src/current/_includes/v22.1/performance/tuning-secure.py b/src/current/_includes/v22.1/performance/tuning-secure.py deleted file mode 100644 index a644dbb1c87..00000000000 --- a/src/current/_includes/v22.1/performance/tuning-secure.py +++ /dev/null @@ -1,77 +0,0 @@ -#!/usr/bin/env python - -import argparse -import psycopg2 -import time - -parser = argparse.ArgumentParser( - description="test performance of statements against movr database") -parser.add_argument("--host", required=True, - help="ip address of one of the CockroachDB nodes") -parser.add_argument("--statement", required=True, - help="statement to execute") -parser.add_argument("--repeat", type=int, - help="number of times to repeat the statement", default = 20) -parser.add_argument("--times", - help="print time for each repetition of the statement", action="store_true") -parser.add_argument("--cumulative", - help="print cumulative time for all repetitions of the statement", action="store_true") -args = parser.parse_args() - -conn = psycopg2.connect( - database='movr', - user='root', - host=args.host, - port=26257, - sslmode='require', - sslrootcert='certs/ca.crt', - sslkey='certs/client.root.key', - sslcert='certs/client.root.crt' -) -conn.set_session(autocommit=True) -cur = conn.cursor() - -def median(lst): - n = len(lst) - if n < 1: - return None - if n % 2 == 1: - return sorted(lst)[n//2] - else: - return sum(sorted(lst)[n//2-1:n//2+1])/2.0 - -times = list() -for n in range(args.repeat): - start = time.time() - statement = args.statement - cur.execute(statement) - if n < 1: - if cur.description is not None: - colnames = [desc[0] for desc in cur.description] - print("") - print("Result:") - print(colnames) - rows = cur.fetchall() - for row in rows: - print([str(cell) for cell in row]) - end = time.time() - times.append((end - start)* 1000) - -cur.close() -conn.close() - -print("") -if args.times: - print("Times (milliseconds):") - print(times) - print("") -# print("Average time (milliseconds):") -# print(float(sum(times))/len(times)) -# print("") -print("Median time (milliseconds):") -print(median(times)) -print("") -if args.cumulative: - print("Cumulative time (milliseconds):") - print(sum(times)) - print("") diff --git a/src/current/_includes/v22.1/performance/tuning.py b/src/current/_includes/v22.1/performance/tuning.py deleted file mode 100644 index dcb567dad91..00000000000 --- 
a/src/current/_includes/v22.1/performance/tuning.py +++ /dev/null @@ -1,73 +0,0 @@ -#!/usr/bin/env python - -import argparse -import psycopg2 -import time - -parser = argparse.ArgumentParser( - description="test performance of statements against movr database") -parser.add_argument("--host", required=True, - help="ip address of one of the CockroachDB nodes") -parser.add_argument("--statement", required=True, - help="statement to execute") -parser.add_argument("--repeat", type=int, - help="number of times to repeat the statement", default = 20) -parser.add_argument("--times", - help="print time for each repetition of the statement", action="store_true") -parser.add_argument("--cumulative", - help="print cumulative time for all repetitions of the statement", action="store_true") -args = parser.parse_args() - -conn = psycopg2.connect( - database='movr', - user='root', - host=args.host, - port=26257 -) -conn.set_session(autocommit=True) -cur = conn.cursor() - -def median(lst): - n = len(lst) - if n < 1: - return None - if n % 2 == 1: - return sorted(lst)[n//2] - else: - return sum(sorted(lst)[n//2-1:n//2+1])/2.0 - -times = list() -for n in range(args.repeat): - start = time.time() - statement = args.statement - cur.execute(statement) - if n < 1: - if cur.description is not None: - colnames = [desc[0] for desc in cur.description] - print("") - print("Result:") - print(colnames) - rows = cur.fetchall() - for row in rows: - print([str(cell) for cell in row]) - end = time.time() - times.append((end - start)* 1000) - -cur.close() -conn.close() - -print("") -if args.times: - print("Times (milliseconds):") - print(times) - print("") -# print("Average time (milliseconds):") -# print(float(sum(times))/len(times)) -# print("") -print("Median time (milliseconds):") -print(median(times)) -print("") -if args.cumulative: - print("Cumulative time (milliseconds):") - print(sum(times)) - print("") diff --git a/src/current/_includes/v22.1/performance/use-hash-sharded-indexes.md b/src/current/_includes/v22.1/performance/use-hash-sharded-indexes.md deleted file mode 100644 index 715b378c9bb..00000000000 --- a/src/current/_includes/v22.1/performance/use-hash-sharded-indexes.md +++ /dev/null @@ -1 +0,0 @@ -For performance reasons, we discourage [indexing on sequential keys](indexes.html). If, however, you are working with a table that must be indexed on sequential keys, you should use [hash-sharded indexes](hash-sharded-indexes.html). Hash-sharded indexes distribute sequential traffic uniformly across ranges, eliminating single-range hot spots and improving write performance on sequentially-keyed indexes at a small cost to read performance. \ No newline at end of file diff --git a/src/current/_includes/v22.1/prod-deployment/advertise-addr-join.md b/src/current/_includes/v22.1/prod-deployment/advertise-addr-join.md deleted file mode 100644 index 67019d1fcea..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/advertise-addr-join.md +++ /dev/null @@ -1,4 +0,0 @@ -Flag | Description ------|------------ -`--advertise-addr` | Specifies the IP address/hostname and port to tell other nodes to use. The port number can be omitted, in which case it defaults to `26257`.

This value must route to an IP address the node is listening on (with `--listen-addr` unspecified, the node listens on all IP addresses).

In some networking scenarios, you may need to use `--advertise-addr` and/or `--listen-addr` differently. For more details, see [Networking](recommended-production-settings.html#networking). -`--join` | Identifies the address of 3-5 of the initial nodes of the cluster. These addresses should match the addresses that the target nodes are advertising. diff --git a/src/current/_includes/v22.1/prod-deployment/aws-inbound-rules.md b/src/current/_includes/v22.1/prod-deployment/aws-inbound-rules.md deleted file mode 100644 index 8be748205a6..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/aws-inbound-rules.md +++ /dev/null @@ -1,31 +0,0 @@ -#### Inter-node and load balancer-node communication - - Field | Value --------|------------------- - Port Range | **26257** - Source | The ID of your security group (e.g., *sg-07ab277a*) - -#### Application data - - Field | Value --------|------------------- - Port Range | **26257** - Source | Your application's IP ranges - -#### DB Console - - Field | Value --------|------------------- - Port Range | **8080** - Source | Your network's IP ranges - -You can set your network IP by selecting "My IP" in the Source field. - -#### Load balancer-health check communication - - Field | Value --------|------------------- - Port Range | **8080** - Source | The IP range of your VPC in CIDR notation (e.g., 10.12.0.0/16) - - To get the IP range of a VPC, open the [Amazon VPC console](https://console.aws.amazon.com/vpc/) and find the VPC listed in the section called Your VPCs. \ No newline at end of file diff --git a/src/current/_includes/v22.1/prod-deployment/backup.sh b/src/current/_includes/v22.1/prod-deployment/backup.sh deleted file mode 100644 index efcbd4c7041..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/backup.sh +++ /dev/null @@ -1,21 +0,0 @@ -#!/bin/bash - -set -euo pipefail - -# This script creates full backups when run on the configured -# day of the week and incremental backups when run on other days, and tracks -# recently created backups in a file to pass as the base for incremental backups. - -what="" # Leave empty for cluster backup, or add "DATABASE database_name" to backup a database. -base="/backups" # The URL where you want to store the backup. -extra="" # Any additional parameters that need to be appended to the BACKUP URI e.g., AWS key params. -recent=recent_backups.txt # File in which recent backups are tracked. -backup_parameters= # e.g., "WITH revision_history" - -# Customize the `cockroach sql` command with `--host`, `--certs-dir` or `--insecure`, `--port`, and additional flags as needed to connect to the SQL client. -runsql() { cockroach sql --insecure -e "$1"; } - -destination="${base}/$(date +"%Y-%V")${extra}" # %V is the week number of the year, with Monday as the first day of the week. - -runsql "BACKUP $what TO '$destination' AS OF SYSTEM TIME '-1m' $backup_parameters" -echo "backed up to ${destination}" diff --git a/src/current/_includes/v22.1/prod-deployment/check-sql-query-performance.md b/src/current/_includes/v22.1/prod-deployment/check-sql-query-performance.md deleted file mode 100644 index 1abfcc52778..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/check-sql-query-performance.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -If you aren't sure whether SQL query performance needs to be improved on your cluster, see [Identify slow statements](query-behavior-troubleshooting.html#identify-slow-queries). 
-{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/prod-deployment/cloud-report.md b/src/current/_includes/v22.1/prod-deployment/cloud-report.md deleted file mode 100644 index aa2a765af6e..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/cloud-report.md +++ /dev/null @@ -1 +0,0 @@ -Cockroach Labs creates a yearly cloud report focused on evaluating hardware performance. For more information, see the [2022 Cloud Report](https://www.cockroachlabs.com/guides/2022-cloud-report/). \ No newline at end of file diff --git a/src/current/_includes/v22.1/prod-deployment/cluster-unavailable-monitoring.md b/src/current/_includes/v22.1/prod-deployment/cluster-unavailable-monitoring.md deleted file mode 100644 index d4d8803ca1f..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/cluster-unavailable-monitoring.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -If the cluster becomes unavailable, the DB Console and Cluster API will also become unavailable. You can continue to monitor the cluster via the [Prometheus endpoint](monitoring-and-alerting.html#prometheus-endpoint) and [logs](logging-overview.html). -{{site.data.alerts.end}} \ No newline at end of file diff --git a/src/current/_includes/v22.1/prod-deployment/healthy-command-commit-latency.md b/src/current/_includes/v22.1/prod-deployment/healthy-command-commit-latency.md deleted file mode 100644 index d055f37aded..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/healthy-command-commit-latency.md +++ /dev/null @@ -1 +0,0 @@ -**Expected values for a healthy cluster**: On SSDs ([strongly recommended](recommended-production-settings.html#storage)), this should be between 1 and 100 milliseconds. On HDDs, this should be no more than 1 second. \ No newline at end of file diff --git a/src/current/_includes/v22.1/prod-deployment/healthy-cpu-percent.md b/src/current/_includes/v22.1/prod-deployment/healthy-cpu-percent.md deleted file mode 100644 index a58b0b87973..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/healthy-cpu-percent.md +++ /dev/null @@ -1 +0,0 @@ -**Expected values for a healthy cluster**: CPU utilized by CockroachDB should not persistently exceed 80%. Because this metric does not reflect system CPU usage, values above 80% suggest that actual CPU utilization is nearing 100%. \ No newline at end of file diff --git a/src/current/_includes/v22.1/prod-deployment/healthy-crdb-memory.md b/src/current/_includes/v22.1/prod-deployment/healthy-crdb-memory.md deleted file mode 100644 index a0994e08eed..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/healthy-crdb-memory.md +++ /dev/null @@ -1 +0,0 @@ -**Expected values for a healthy cluster**: RSS minus Go Total and CGo Total should not exceed 100 MiB. Go Allocated should not exceed a few hundred MiB. CGo Allocated should not exceed the `--cache` size. \ No newline at end of file diff --git a/src/current/_includes/v22.1/prod-deployment/healthy-disk-ops-in-progress.md b/src/current/_includes/v22.1/prod-deployment/healthy-disk-ops-in-progress.md deleted file mode 100644 index e80714df120..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/healthy-disk-ops-in-progress.md +++ /dev/null @@ -1 +0,0 @@ -**Expected values for a healthy cluster**: This value should be 0 or single-digit values for short periods of time. If the values persist in double digits, you may have an I/O bottleneck. 
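If you want to cross-check queue depth from the node itself rather than the DB Console, a tool such as `iostat` (from the `sysstat` package on most Linux distributions) reports in-flight operations per device. This is a hypothetical spot-check, not part of CockroachDB; the device name `nvme0n1` is an assumption and should be replaced with the volume backing the CockroachDB store:

~~~ shell
# Sample extended device statistics every second; the queue-size column
# (aqu-sz in newer sysstat releases, avgqu-sz in older ones) approximates
# the number of I/O requests outstanding on the device.
$ iostat -x nvme0n1 1
~~~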
\ No newline at end of file diff --git a/src/current/_includes/v22.1/prod-deployment/healthy-lsm.md b/src/current/_includes/v22.1/prod-deployment/healthy-lsm.md deleted file mode 100644 index 31fd320af2a..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/healthy-lsm.md +++ /dev/null @@ -1 +0,0 @@ -**Expected values for a healthy cluster**: The number of L0 files should **not** be in the high thousands. High values indicate heavy write load that is causing accumulation of files in level 0. These files are not being compacted quickly enough to lower levels, resulting in a misshapen LSM. \ No newline at end of file diff --git a/src/current/_includes/v22.1/prod-deployment/healthy-node-heartbeat-latency.md b/src/current/_includes/v22.1/prod-deployment/healthy-node-heartbeat-latency.md deleted file mode 100644 index ed58182c98f..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/healthy-node-heartbeat-latency.md +++ /dev/null @@ -1 +0,0 @@ -**Expected values for a healthy cluster**: Less than 100ms in addition to the [network latency](ui-network-latency-page.html) between nodes in the cluster. \ No newline at end of file diff --git a/src/current/_includes/v22.1/prod-deployment/healthy-read-amplification.md b/src/current/_includes/v22.1/prod-deployment/healthy-read-amplification.md deleted file mode 100644 index c7ffe9c6d17..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/healthy-read-amplification.md +++ /dev/null @@ -1 +0,0 @@ -**Expected values for a healthy cluster**: Read amplification factor should be in the single digits. A value exceeding 50 for 1 hour strongly suggests that the LSM tree has an unhealthy shape. \ No newline at end of file diff --git a/src/current/_includes/v22.1/prod-deployment/healthy-sql-memory.md b/src/current/_includes/v22.1/prod-deployment/healthy-sql-memory.md deleted file mode 100644 index 0b963ed55b3..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/healthy-sql-memory.md +++ /dev/null @@ -1 +0,0 @@ -**Expected values for a healthy cluster**: This value should not exceed the [`--max-sql-memory`](recommended-production-settings.html#cache-and-sql-memory-size) size. A healthy threshold is 75% of allocated `--max-sql-memory`. \ No newline at end of file diff --git a/src/current/_includes/v22.1/prod-deployment/healthy-storage-capacity.md b/src/current/_includes/v22.1/prod-deployment/healthy-storage-capacity.md deleted file mode 100644 index af6253c932d..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/healthy-storage-capacity.md +++ /dev/null @@ -1 +0,0 @@ -**Expected values for a healthy cluster**: Used capacity should not persistently exceed 80% of the total capacity. \ No newline at end of file diff --git a/src/current/_includes/v22.1/prod-deployment/healthy-workload-concurrency.md b/src/current/_includes/v22.1/prod-deployment/healthy-workload-concurrency.md deleted file mode 100644 index 6e8d4891339..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/healthy-workload-concurrency.md +++ /dev/null @@ -1 +0,0 @@ -**Expected values for a healthy cluster**: At any time, the total number of actively executing SQL statements should not exceed 4 times the number of vCPUs in the cluster. For more details, see [Sizing connection pools](connection-pooling.html#sizing-connection-pools). 
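As a rough illustration of that guideline (the node count and vCPU size below are hypothetical, not taken from any particular deployment), the ceiling scales with the cluster's total vCPUs:

~~~ shell
# Hypothetical sizing sketch: 3 nodes x 8 vCPUs each, applying the
# "no more than 4 active statements per vCPU" guideline above.
$ echo $((4 * 3 * 8))   # 96 -- keep total active statements across all connection pools near or below this
~~~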
\ No newline at end of file diff --git a/src/current/_includes/v22.1/prod-deployment/insecure-flag.md b/src/current/_includes/v22.1/prod-deployment/insecure-flag.md deleted file mode 100644 index a13951ba4bc..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/insecure-flag.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_danger}} -The `--insecure` flag used in this tutorial is intended for non-production testing only. To run CockroachDB in production, use a secure cluster instead. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/prod-deployment/insecure-initialize-cluster.md b/src/current/_includes/v22.1/prod-deployment/insecure-initialize-cluster.md deleted file mode 100644 index 1bf99ee27c0..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/insecure-initialize-cluster.md +++ /dev/null @@ -1,12 +0,0 @@ -On your local machine, complete the node startup process and have them join together as a cluster: - -1. [Install CockroachDB](install-cockroachdb.html) on your local machine, if you haven't already. - -2. Run the [`cockroach init`](cockroach-init.html) command, with the `--host` flag set to the address of any node: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach init --insecure --host=
- ~~~ - - Each node then prints helpful details to the [standard output](cockroach-start.html#standard-output), such as the CockroachDB version, the URL for the DB Console, and the SQL URL for clients. diff --git a/src/current/_includes/v22.1/prod-deployment/insecure-recommendations.md b/src/current/_includes/v22.1/prod-deployment/insecure-recommendations.md deleted file mode 100644 index e27b3489865..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/insecure-recommendations.md +++ /dev/null @@ -1,13 +0,0 @@ -- Consider using a [secure cluster](manual-deployment.html) instead. Using an insecure cluster comes with risks: - - Your cluster is open to any client that can access any node's IP addresses. - - Any user, even `root`, can log in without providing a password. - - Any user, connecting as `root`, can read or write any data in your cluster. - - There is no network encryption or authentication, and thus no confidentiality. - -- Decide how you want to access your DB Console: - - Access Level | Description - -------------|------------ - Partially open | Set a firewall rule to allow only specific IP addresses to communicate on port `8080`. - Completely open | Set a firewall rule to allow all IP addresses to communicate on port `8080`. - Completely closed | Set a firewall rule to disallow all communication on port `8080`. In this case, a machine with SSH access to a node could use an SSH tunnel to access the DB Console. diff --git a/src/current/_includes/v22.1/prod-deployment/insecure-requirements.md b/src/current/_includes/v22.1/prod-deployment/insecure-requirements.md deleted file mode 100644 index fb2faee26e8..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/insecure-requirements.md +++ /dev/null @@ -1,9 +0,0 @@ -- You must have [SSH access]({{page.ssh-link}}) to each machine. This is necessary for distributing and starting CockroachDB binaries. - -- Your network configuration must allow TCP communication on the following ports: - - `26257` for intra-cluster and client-cluster communication - - `8080` to expose your DB Console - -- Carefully review the [Production Checklist](recommended-production-settings.html) and recommended [Topology Patterns](topology-patterns.html). - -{% include {{ page.version.version }}/prod-deployment/topology-recommendations.md %} \ No newline at end of file diff --git a/src/current/_includes/v22.1/prod-deployment/insecure-scale-cluster.md b/src/current/_includes/v22.1/prod-deployment/insecure-scale-cluster.md deleted file mode 100644 index 335463e6db3..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/insecure-scale-cluster.md +++ /dev/null @@ -1,121 +0,0 @@ -You can start the nodes manually or automate the process using [systemd](https://www.freedesktop.org/wiki/Software/systemd/). - -
- - -
-

- -
- -For each additional node you want to add to the cluster, complete the following steps: - -1. SSH to the machine where you want the node to run. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - -3. Copy the binary into the `PATH`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -4. Run the [`cockroach start`](cockroach-start.html) command, passing the new node's address as the `--advertise-addr` flag and pointing `--join` to the three existing nodes (also include `--locality` if you set it earlier). - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - --insecure \ - --advertise-addr= \ - --join=,, \ - --cache=.25 \ - --max-sql-memory=.25 \ - --background - ~~~ - -5. Update your load balancer to recognize the new node. - -
- -
- -For each additional node you want to add to the cluster, complete the following steps: - -1. SSH to the machine where you want the node to run. Ensure you are logged in as the `root` user. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - -3. Copy the binary into the `PATH`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -4. Create the Cockroach directory: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ mkdir /var/lib/cockroach - ~~~ - -5. Create a Unix user named `cockroach`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ useradd cockroach - ~~~ - -6. Change the ownership of the `cockroach` directory to the user `cockroach`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ chown cockroach /var/lib/cockroach - ~~~ - -7. Download the [sample configuration template](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/insecurecockroachdb.service): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ wget -qO- https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/insecurecockroachdb.service - ~~~ - - Alternatively, you can create the file yourself and copy the script into it: - - {% include_cached copy-clipboard.html %} - ~~~ shell - {% include {{ page.version.version }}/prod-deployment/insecurecockroachdb.service %} - ~~~ - - {{site.data.alerts.callout_info}} - Previously, the sample configuration file set `TimeoutStopSec` to 60 seconds. This recommendation has been lengthened to 300 seconds, to give the `cockroach` process more time to stop gracefully. - {{site.data.alerts.end}} - - Save the file in the `/etc/systemd/system/` directory - -8. Customize the sample configuration template for your deployment: - - Specify values for the following flags in the sample configuration template: - - {% include {{ page.version.version }}/prod-deployment/advertise-addr-join.md %} - -9. Repeat these steps for each additional node that you want in your cluster. - -
diff --git a/src/current/_includes/v22.1/prod-deployment/insecure-start-nodes.md b/src/current/_includes/v22.1/prod-deployment/insecure-start-nodes.md deleted file mode 100644 index 1a5f95e2b24..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/insecure-start-nodes.md +++ /dev/null @@ -1,192 +0,0 @@ -You can start the nodes manually or automate the process using [systemd](https://www.freedesktop.org/wiki/Software/systemd/). - -
- - -
-

- -
- -For each initial node of your cluster, complete the following steps: - -{{site.data.alerts.callout_info}} -After completing these steps, nodes will not yet be live. They will complete the startup process and join together to form a cluster as soon as the cluster is initialized in the next step. -{{site.data.alerts.end}} - -1. SSH to the machine where you want the node to run. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - -3. Copy the binary into the `PATH`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -4. CockroachDB uses custom-built versions of the [GEOS](spatial-glossary.html#geos) libraries. Copy these libraries to the location where CockroachDB expects to find them: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ mkdir -p /usr/local/lib/cockroach - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/lib/libgeos.so /usr/local/lib/cockroach/ - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/lib/libgeos_c.so /usr/local/lib/cockroach/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -5. Run the [`cockroach start`](cockroach-start.html) command: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - --insecure \ - --advertise-addr= \ - --join=,, \ - --cache=.25 \ - --max-sql-memory=.25 \ - --background - ~~~ - - This command primes the node to start, using the following flags: - - Flag | Description - -----|------------ - `--insecure` | Indicates that the cluster is insecure, with no network encryption or authentication. - `--advertise-addr` | Specifies the IP address/hostname and port to tell other nodes to use. The port number can be omitted, in which case it defaults to `26257`.

This value must route to an IP address the node is listening on (with `--listen-addr` unspecified, the node listens on all IP addresses).

In some networking scenarios, you may need to use `--advertise-addr` and/or `--listen-addr` differently. For more details, see [Networking](recommended-production-settings.html#networking). - `--join` | Identifies the address of 3-5 of the initial nodes of the cluster. These addresses should match the addresses that the target nodes are advertising. - `--cache`
`--max-sql-memory` | Increases the node's cache size to 25% of available system memory to improve read performance. The capacity for in-memory SQL processing defaults to 25% of system memory but can be raised, if necessary, to increase the number of simultaneous client connections allowed by the node as well as the node's capacity for in-memory processing of rows when using `ORDER BY`, `GROUP BY`, `DISTINCT`, joins, and window functions. For more details, see [Cache and SQL Memory Size](recommended-production-settings.html#cache-and-sql-memory-size). - `--background` | Starts the node in the background so you gain control of the terminal to issue more commands. - - When deploying across multiple datacenters, or when there is otherwise high latency between nodes, it is recommended to set `--locality` as well. It is also required to use certain enterprise features. For more details, see [Locality](cockroach-start.html#locality). - - For other flags not explicitly set, the command uses default values. For example, the node stores data in `--store=cockroach-data` and binds DB Console HTTP requests to `--http-addr=localhost:8080`. To set these options manually, see [Start a Node](cockroach-start.html). - -6. Repeat these steps for each additional node that you want in your cluster. - -
- -
- -For each initial node of your cluster, complete the following steps: - -{{site.data.alerts.callout_info}} -After completing these steps, nodes will not yet be live. They will complete the startup process and join together to form a cluster as soon as the cluster is initialized in the next step. -{{site.data.alerts.end}} - -1. SSH to the machine where you want the node to run. Ensure you are logged in as the `root` user. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - -3. Copy the binary into the `PATH`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -4. CockroachDB uses custom-built versions of the [GEOS](spatial-glossary.html#geos) libraries. Copy these libraries to the location where CockroachDB expects to find them: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ mkdir -p /usr/local/lib/cockroach - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/lib/libgeos.so /usr/local/lib/cockroach/ - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/lib/libgeos_c.so /usr/local/lib/cockroach/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -5. Create the Cockroach directory: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ mkdir /var/lib/cockroach - ~~~ - -6. Create a Unix user named `cockroach`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ useradd cockroach - ~~~ - -7. Change the ownership of the `cockroach` directory to the user `cockroach`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ chown cockroach /var/lib/cockroach - ~~~ - -8. Download the [sample configuration template](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/insecurecockroachdb.service) and save the file in the `/etc/systemd/system/` directory: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ wget -qO- https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/insecurecockroachdb.service - ~~~ - - Alternatively, you can create the file yourself and copy the script into it: - - {% include_cached copy-clipboard.html %} - ~~~ shell - {% include {{ page.version.version }}/prod-deployment/insecurecockroachdb.service %} - ~~~ - - {{site.data.alerts.callout_info}} - Previously, the sample configuration file set `TimeoutStopSec` to 60 seconds. This recommendation has been lengthened to 300 seconds, to give the `cockroach` process more time to stop gracefully. - {{site.data.alerts.end}} - -9. In the sample configuration template, specify values for the following flags: - - {% include {{ page.version.version }}/prod-deployment/advertise-addr-join.md %} - - When deploying across multiple datacenters, or when there is otherwise high latency between nodes, it is recommended to set `--locality` as well. It is also required to use certain enterprise features. For more details, see [Locality](cockroach-start.html#locality). 
- - For other flags not explicitly set, the command uses default values. For example, the node stores data in `--store=cockroach-data` and binds DB Console HTTP requests to `--http-port=8080`. To set these options manually, see [Start a Node](cockroach-start.html). - -10. Start the CockroachDB cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ systemctl start insecurecockroachdb - ~~~ - -11. Repeat these steps for each additional node that you want in your cluster. - -{{site.data.alerts.callout_info}} -`systemd` handles node restarts in case of node failure. To stop a node without `systemd` restarting it, run `systemctl stop insecurecockroachdb` -{{site.data.alerts.end}} - -
diff --git a/src/current/_includes/v22.1/prod-deployment/insecure-test-cluster.md b/src/current/_includes/v22.1/prod-deployment/insecure-test-cluster.md deleted file mode 100644 index 9f1d66fad3b..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/insecure-test-cluster.md +++ /dev/null @@ -1,41 +0,0 @@ -CockroachDB replicates and distributes data behind-the-scenes and uses a [Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) to enable each node to locate data across the cluster. Once a cluster is live, any node can be used as a SQL gateway. - -When using a load balancer, you should issue commands directly to the load balancer, which then routes traffic to the nodes. - -Use the [built-in SQL client](cockroach-sql.html) locally as follows: - -1. On your local machine, launch the built-in SQL client, with the `--host` flag set to the address of the load balancer: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --insecure --host=
- ~~~ - -2. Create an `insecurenodetest` database: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE DATABASE insecurenodetest; - ~~~ - -3. View the cluster's databases, which will include `insecurenodetest`: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SHOW DATABASES; - ~~~ - - ~~~ - +--------------------+ - | Database | - +--------------------+ - | crdb_internal | - | information_schema | - | insecurenodetest | - | pg_catalog | - | system | - +--------------------+ - (5 rows) - ~~~ - -4. Use `\q` to exit the SQL shell. diff --git a/src/current/_includes/v22.1/prod-deployment/insecure-test-load-balancing.md b/src/current/_includes/v22.1/prod-deployment/insecure-test-load-balancing.md deleted file mode 100644 index ae47b5cd160..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/insecure-test-load-balancing.md +++ /dev/null @@ -1,79 +0,0 @@ -CockroachDB comes with a number of [built-in workloads](cockroach-workload.html) for simulating client traffic. This step features CockroachDB's version of the [TPC-C](http://www.tpc.org/tpcc/) workload. - -{{site.data.alerts.callout_info}} -Be sure that you have configured your network to allow traffic from the application to the load balancer. In this case, you will run the sample workload on one of your machines. The traffic source should therefore be the **internal (private)** IP address of that machine. -{{site.data.alerts.end}} - -{{site.data.alerts.callout_success}} -For comprehensive guidance on benchmarking CockroachDB with TPC-C, see [Performance Benchmarking](performance-benchmarking-with-tpcc-local.html). -{{site.data.alerts.end}} - -1. SSH to the machine where you want the run the sample TPC-C workload. - - This should be a machine that is not running a CockroachDB node. - -1. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - -1. Copy the binary into the `PATH`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -1. Use the [`cockroach workload`](cockroach-workload.html) command to load the initial schema and data, pointing it at the IP address of the load balancer: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach workload init tpcc \ - 'postgresql://root@:26257/tpcc?sslmode=disable' - ~~~ - -1. 
Use the `cockroach workload` command to run the workload for 10 minutes: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach workload run tpcc \ - --duration=10m \ - 'postgresql://root@:26257/tpcc?sslmode=disable' - ~~~ - - You'll see per-operation statistics print to standard output every second: - - ~~~ - _elapsed___errors__ops/sec(inst)___ops/sec(cum)__p50(ms)__p95(ms)__p99(ms)_pMax(ms) - 1s 0 1443.4 1494.8 4.7 9.4 27.3 67.1 transfer - 2s 0 1686.5 1590.9 4.7 8.1 15.2 28.3 transfer - 3s 0 1735.7 1639.0 4.7 7.3 11.5 28.3 transfer - 4s 0 1542.6 1614.9 5.0 8.9 12.1 21.0 transfer - 5s 0 1695.9 1631.1 4.7 7.3 11.5 22.0 transfer - 6s 0 1569.2 1620.8 5.0 8.4 11.5 15.7 transfer - 7s 0 1614.6 1619.9 4.7 8.1 12.1 16.8 transfer - 8s 0 1344.4 1585.6 5.8 10.0 15.2 31.5 transfer - 9s 0 1351.9 1559.5 5.8 10.0 16.8 54.5 transfer - 10s 0 1514.8 1555.0 5.2 8.1 12.1 16.8 transfer - ... - ~~~ - - After the specified duration (10 minutes in this case), the workload will stop and you'll see totals printed to standard output: - - ~~~ - _elapsed___errors_____ops(total)___ops/sec(cum)__avg(ms)__p50(ms)__p95(ms)__p99(ms)_pMax(ms)__result - 600.0s 0 823902 1373.2 5.8 5.5 10.0 15.2 209.7 - ~~~ - - {{site.data.alerts.callout_success}} - For more `tpcc` options, use `cockroach workload run tpcc --help`. For details about other workloads built into the `cockroach` binary, use `cockroach workload --help`. - {{site.data.alerts.end}} - -1. To monitor the load generator's progress, open the [DB Console](ui-overview.html) by pointing a browser to the address in the `admin` field in the standard output of any node on startup. - - Since the load generator is pointed at the load balancer, the connections will be evenly distributed across nodes. To verify this, click **Metrics** on the left, select the **SQL** dashboard, and then check the **SQL Connections** graph. You can use the **Graph** menu to filter the graph for specific nodes. diff --git a/src/current/_includes/v22.1/prod-deployment/insecurecockroachdb.service b/src/current/_includes/v22.1/prod-deployment/insecurecockroachdb.service deleted file mode 100644 index 54d5ea2047a..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/insecurecockroachdb.service +++ /dev/null @@ -1,16 +0,0 @@ -[Unit] -Description=Cockroach Database cluster node -Requires=network.target -[Service] -Type=notify -WorkingDirectory=/var/lib/cockroach -ExecStart=/usr/local/bin/cockroach start --insecure --advertise-addr= --join=,, --cache=.25 --max-sql-memory=.25 -TimeoutStopSec=300 -Restart=always -RestartSec=10 -StandardOutput=syslog -StandardError=syslog -SyslogIdentifier=cockroach -User=cockroach -[Install] -WantedBy=default.target diff --git a/src/current/_includes/v22.1/prod-deployment/join-flag-multi-region.md b/src/current/_includes/v22.1/prod-deployment/join-flag-multi-region.md deleted file mode 100644 index 93ae34a8716..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/join-flag-multi-region.md +++ /dev/null @@ -1 +0,0 @@ -When starting a multi-region cluster, set more than one `--join` address per region, and select nodes that are spread across failure domains. This ensures [high availability](architecture/replication-layer.html#overview). 
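For example (the addresses and locality values below are placeholders for illustration only), a node in one region might list join targets drawn from several regions and failure domains:

~~~ shell
# Hypothetical multi-region start command: the --join list mixes addresses
# from different regions/zones so the node can bootstrap even if one
# failure domain is unreachable.
$ cockroach start \
--insecure \
--advertise-addr=<node address> \
--locality=region=us-east1,zone=us-east1-b \
--join=<us-east1 node>,<us-west1 node>,<europe-west1 node> \
--cache=.25 \
--max-sql-memory=.25 \
--background
~~~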
\ No newline at end of file diff --git a/src/current/_includes/v22.1/prod-deployment/join-flag-single-region.md b/src/current/_includes/v22.1/prod-deployment/join-flag-single-region.md deleted file mode 100644 index 99250cdfee9..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/join-flag-single-region.md +++ /dev/null @@ -1 +0,0 @@ -For a cluster in a single region, set 3-5 `--join` addresses. Each starting node will attempt to contact one of the join hosts. In case a join host cannot be reached, the node will try another address on the list until it can join the gossip network. \ No newline at end of file diff --git a/src/current/_includes/v22.1/prod-deployment/monitor-cluster.md b/src/current/_includes/v22.1/prod-deployment/monitor-cluster.md deleted file mode 100644 index 363ef1167c1..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/monitor-cluster.md +++ /dev/null @@ -1,3 +0,0 @@ -Despite CockroachDB's various [built-in safeguards against failure](frequently-asked-questions.html#how-does-cockroachdb-survive-failures), it is critical to actively monitor the overall health and performance of a cluster running in production and to create alerting rules that promptly send notifications when there are events that require investigation or intervention. - -For details about available monitoring options and the most important events and metrics to alert on, see [Monitoring and Alerting](monitoring-and-alerting.html). diff --git a/src/current/_includes/v22.1/prod-deployment/process-termination.md b/src/current/_includes/v22.1/prod-deployment/process-termination.md deleted file mode 100644 index 23f9310572b..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/process-termination.md +++ /dev/null @@ -1,13 +0,0 @@ -{{site.data.alerts.callout_danger}} -We do not recommend sending `SIGKILL` to perform a "hard" shutdown, which bypasses CockroachDB's [node shutdown logic](#node-shutdown-sequence) and forcibly terminates the process. This can corrupt log files and, in certain edge cases, can result in temporary data unavailability, latency spikes, uncertainty errors, ambiguous commit errors, or query timeouts. When decommissioning, a hard shutdown will leave ranges under-replicated and vulnerable to another node failure, causing [quorum](architecture/replication-layer.html#overview) loss in the window before up-replication completes. -{{site.data.alerts.end}} - -- On production deployments, use the process manager to send `SIGTERM` to the process. - - - For example, with [`systemd`](https://www.freedesktop.org/wiki/Software/systemd/), run `systemctl stop {systemd config filename}`. - -- When using CockroachDB for local testing: - - - When running a server on the foreground, use `ctrl-c` in the terminal to send `SIGINT` to the process. - - - When running with the [`--background` flag](cockroach-start.html#general), use `pkill`, `kill`, or look up the process ID with `ps -ef | grep cockroach | grep -v grep` and then run `kill -TERM {process ID}`. \ No newline at end of file diff --git a/src/current/_includes/v22.1/prod-deployment/prod-guidance-cache-max-sql-memory.md b/src/current/_includes/v22.1/prod-deployment/prod-guidance-cache-max-sql-memory.md deleted file mode 100644 index 0a6b979c581..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/prod-guidance-cache-max-sql-memory.md +++ /dev/null @@ -1 +0,0 @@ -For production deployments, set `--cache` to `25%` or higher. 
Avoid setting `--cache` and `--max-sql-memory` to a combined value of more than 75% of a machine's total RAM. Doing so increases the risk of memory-related failures. \ No newline at end of file diff --git a/src/current/_includes/v22.1/prod-deployment/prod-guidance-connection-pooling.md b/src/current/_includes/v22.1/prod-deployment/prod-guidance-connection-pooling.md deleted file mode 100644 index 17b87a9988b..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/prod-guidance-connection-pooling.md +++ /dev/null @@ -1 +0,0 @@ -The total number of workload connections across all connection pools **should not exceed 4 times the number of vCPUs** in the cluster by a large amount. \ No newline at end of file diff --git a/src/current/_includes/v22.1/prod-deployment/prod-guidance-disable-swap.md b/src/current/_includes/v22.1/prod-deployment/prod-guidance-disable-swap.md deleted file mode 100644 index f988eb016d4..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/prod-guidance-disable-swap.md +++ /dev/null @@ -1 +0,0 @@ -Disable Linux memory swapping. Over-allocating memory on production machines can lead to unexpected performance issues when pages have to be read back into memory. \ No newline at end of file diff --git a/src/current/_includes/v22.1/prod-deployment/prod-guidance-larger-nodes.md b/src/current/_includes/v22.1/prod-deployment/prod-guidance-larger-nodes.md deleted file mode 100644 index c165a0130b7..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/prod-guidance-larger-nodes.md +++ /dev/null @@ -1 +0,0 @@ -To optimize for throughput, use larger nodes with up to 32 vCPUs. To further increase throughput, add more nodes to the cluster instead of increasing node size. \ No newline at end of file diff --git a/src/current/_includes/v22.1/prod-deployment/prod-guidance-log-volume.md b/src/current/_includes/v22.1/prod-deployment/prod-guidance-log-volume.md deleted file mode 100644 index 7cc1a26ece7..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/prod-guidance-log-volume.md +++ /dev/null @@ -1 +0,0 @@ -Store CockroachDB [log files](configure-logs.html#logging-directory) in a separate volume from the main data store so that logging is not impacted by I/O throttling. \ No newline at end of file diff --git a/src/current/_includes/v22.1/prod-deployment/prod-guidance-lvm.md b/src/current/_includes/v22.1/prod-deployment/prod-guidance-lvm.md deleted file mode 100644 index c1cd5885f1e..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/prod-guidance-lvm.md +++ /dev/null @@ -1 +0,0 @@ -Do not use LVM in the I/O path. Dynamically resizing CockroachDB store volumes can result in significant performance degradation. Using LVM snapshots in lieu of CockroachDB [backup and restore](take-full-and-incremental-backups.html) is also not supported. \ No newline at end of file diff --git a/src/current/_includes/v22.1/prod-deployment/prod-guidance-store-volume.md b/src/current/_includes/v22.1/prod-deployment/prod-guidance-store-volume.md deleted file mode 100644 index c957422ce07..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/prod-guidance-store-volume.md +++ /dev/null @@ -1 +0,0 @@ -Use dedicated volumes for the CockroachDB [store](cockroach-start.html#store). Do not share the store volume with any other I/O activity. 
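As a sketch of what that separation can look like (the mount point and addresses below are hypothetical), point the store at its own volume rather than the root filesystem:

~~~ shell
# Hypothetical layout: /mnt/cockroach-data is a dedicated volume used only
# by the CockroachDB store, with no other workload writing to it.
$ cockroach start \
--insecure \
--store=/mnt/cockroach-data \
--advertise-addr=<node address> \
--join=<node1 address>,<node2 address>,<node3 address> \
--background
~~~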
\ No newline at end of file diff --git a/src/current/_includes/v22.1/prod-deployment/prod-see-also.md b/src/current/_includes/v22.1/prod-deployment/prod-see-also.md deleted file mode 100644 index 42ec5cd32c0..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/prod-see-also.md +++ /dev/null @@ -1,7 +0,0 @@ -- [Production Checklist](recommended-production-settings.html) -- [Manual Deployment](manual-deployment.html) -- [Orchestrated Deployment](kubernetes-overview.html) -- [Monitoring and Alerting](monitoring-and-alerting.html) -- [Performance Benchmarking](performance-benchmarking-with-tpcc-small.html) -- [Performance Tuning](performance-best-practices-overview.html) -- [Local Deployment](start-a-local-cluster.html) diff --git a/src/current/_includes/v22.1/prod-deployment/provision-cpu.md b/src/current/_includes/v22.1/prod-deployment/provision-cpu.md deleted file mode 100644 index 48896a432cd..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/provision-cpu.md +++ /dev/null @@ -1 +0,0 @@ -{% if include.threshold == "absolute_minimum" %}**4 vCPUs**{% elsif include.threshold == "minimum" %}**8 vCPUs**{% elsif include.threshold == "maximum" %}**32 vCPUs**{% endif %} diff --git a/src/current/_includes/v22.1/prod-deployment/provision-disk-io.md b/src/current/_includes/v22.1/prod-deployment/provision-disk-io.md deleted file mode 100644 index dadd7113e01..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/provision-disk-io.md +++ /dev/null @@ -1 +0,0 @@ -500 IOPS and 30 MB/s per vCPU diff --git a/src/current/_includes/v22.1/prod-deployment/provision-memory.md b/src/current/_includes/v22.1/prod-deployment/provision-memory.md deleted file mode 100644 index 98136337374..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/provision-memory.md +++ /dev/null @@ -1 +0,0 @@ -**4 GiB of RAM per vCPU** \ No newline at end of file diff --git a/src/current/_includes/v22.1/prod-deployment/provision-storage.md b/src/current/_includes/v22.1/prod-deployment/provision-storage.md deleted file mode 100644 index 89b4210fc4f..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/provision-storage.md +++ /dev/null @@ -1 +0,0 @@ -320 GiB per vCPU diff --git a/src/current/_includes/v22.1/prod-deployment/recommended-instances-aws.md b/src/current/_includes/v22.1/prod-deployment/recommended-instances-aws.md deleted file mode 100644 index 87d0f53e95c..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/recommended-instances-aws.md +++ /dev/null @@ -1,7 +0,0 @@ -- Use general-purpose [`m6i` or `m6a`](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/general-purpose-instances.html) VMs with SSD-backed [EBS volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html). For example, Cockroach Labs has used `m6i.2xlarge` for performance benchmarking. If your workload requires high throughput, use network-optimized `m5n` instances. To simulate bare-metal deployments, use `m5d` with [SSD Instance Store volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ssd-instance-store.html). - - - `m5` and `m5a` instances, and [compute-optimized `c5`, `c5a`, and `c5n`](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/compute-optimized-instances.html) instances, are also acceptable. - - {{site.data.alerts.callout_danger}} - **Do not** use [burstable performance instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/burstable-performance-instances.html), which limit the load on a single core. 
- {{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/prod-deployment/recommended-instances-azure.md b/src/current/_includes/v22.1/prod-deployment/recommended-instances-azure.md deleted file mode 100644 index 74263dbe9d0..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/recommended-instances-azure.md +++ /dev/null @@ -1,7 +0,0 @@ -- Use general-purpose [Dsv5-series](https://docs.microsoft.com/en-us/azure/virtual-machines/dv5-dsv5-series) and [Dasv5-series](https://docs.microsoft.com/en-us/azure/virtual-machines/dasv5-dadsv5-series) or memory-optimized [Ev5-series](https://docs.microsoft.com/en-us/azure/virtual-machines/ev5-esv5-series) and [Easv5-series](https://docs.microsoft.com/en-us/azure/virtual-machines/easv5-eadsv5-series#easv5-series) VMs. For example, Cockroach Labs has used `Standard_D8s_v5`, `Standard_D8as_v5`, `Standard_E8s_v5`, and `Standard_e8as_v5` for performance benchmarking. - - - Compute-optimized [F-series](https://docs.microsoft.com/en-us/azure/virtual-machines/fsv2-series) VMs are also acceptable. - - {{site.data.alerts.callout_danger}} - Do not use ["burstable" B-series](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/b-series-burstable) VMs, which limit the load on CPU resources. Also, Cockroach Labs has experienced data corruption issues on A-series VMs and irregular disk performance on D-series VMs, so we recommend avoiding those as well. - {{site.data.alerts.end}} \ No newline at end of file diff --git a/src/current/_includes/v22.1/prod-deployment/recommended-instances-gcp.md b/src/current/_includes/v22.1/prod-deployment/recommended-instances-gcp.md deleted file mode 100644 index 6dbe048cd16..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/recommended-instances-gcp.md +++ /dev/null @@ -1,5 +0,0 @@ -- Use general-purpose [`t2d-standard`, `n2-standard`, or `n2d-standard`](https://cloud.google.com/compute/pricing#predefined_machine_types) VMs, or use [custom VMs](https://cloud.google.com/compute/docs/instances/creating-instance-with-custom-machine-type). For example, Cockroach Labs has used `t2d-standard-8`, `n2-standard-8`, and `n2d-standard-8` for performance benchmarking. - - {{site.data.alerts.callout_danger}} - Do not use `f1` or `g1` [shared-core machines](https://cloud.google.com/compute/docs/machine-types#sharedcore), which limit the load on CPU resources. - {{site.data.alerts.end}} \ No newline at end of file diff --git a/src/current/_includes/v22.1/prod-deployment/resolution-excessive-concurrency.md b/src/current/_includes/v22.1/prod-deployment/resolution-excessive-concurrency.md deleted file mode 100644 index 8d776db1dba..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/resolution-excessive-concurrency.md +++ /dev/null @@ -1 +0,0 @@ -To prevent issues with workload concurrency, [provision sufficient CPU](recommended-production-settings.html#sizing) and use [connection pooling](recommended-production-settings.html#connection-pooling) for the workload. 
\ No newline at end of file diff --git a/src/current/_includes/v22.1/prod-deployment/resolution-inverted-lsm.md b/src/current/_includes/v22.1/prod-deployment/resolution-inverted-lsm.md deleted file mode 100644 index ac505cc6b68..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/resolution-inverted-lsm.md +++ /dev/null @@ -1 +0,0 @@ -If compaction has fallen behind and caused an [inverted LSM](architecture/storage-layer.html#inverted-lsms), throttle your workload concurrency to allow compaction to catch up and restore a healthy LSM shape. {% include {{ page.version.version }}/prod-deployment/prod-guidance-connection-pooling.md %} If a node is severely impacted, you can [start a new node](cockroach-start.html) and then [decommission the problematic node](node-shutdown.html?filters=decommission#remove-nodes). \ No newline at end of file diff --git a/src/current/_includes/v22.1/prod-deployment/resolution-oom-crash.md b/src/current/_includes/v22.1/prod-deployment/resolution-oom-crash.md deleted file mode 100644 index b2c6c96e356..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/resolution-oom-crash.md +++ /dev/null @@ -1 +0,0 @@ -To prevent OOM crashes, [provision sufficient memory](recommended-production-settings.html#memory). If all CockroachDB machines are provisioned and configured correctly, either run the CockroachDB process on another node with sufficient memory, or [reduce the memory allocated to CockroachDB](recommended-production-settings.html#cache-and-sql-memory-size). \ No newline at end of file diff --git a/src/current/_includes/v22.1/prod-deployment/resolution-untuned-query.md b/src/current/_includes/v22.1/prod-deployment/resolution-untuned-query.md deleted file mode 100644 index e81ff66a53b..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/resolution-untuned-query.md +++ /dev/null @@ -1 +0,0 @@ -If you find queries that are consuming too much memory, [cancel the queries](manage-long-running-queries.html#cancel-long-running-queries) to free up memory usage. For information on optimizing query performance, see [SQL Performance Best Practices](performance-best-practices-overview.html). \ No newline at end of file diff --git a/src/current/_includes/v22.1/prod-deployment/secure-generate-certificates.md b/src/current/_includes/v22.1/prod-deployment/secure-generate-certificates.md deleted file mode 100644 index 9870de5b0cf..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/secure-generate-certificates.md +++ /dev/null @@ -1,201 +0,0 @@ -You can use `cockroach cert` commands or [`openssl` commands](create-security-certificates-openssl.html) to generate security certificates. This section features the `cockroach cert` commands. - -Locally, you'll need to [create the following certificates and keys](cockroach-cert.html): - -- A certificate authority (CA) key pair (`ca.crt` and `ca.key`). -- A node key pair for each node, issued to its IP addresses and any common names the machine uses, as well as to the IP addresses and common names for machines running load balancers. -- A client key pair for the `root` user. You'll use this to run a sample workload against the cluster as well as some `cockroach` client commands from your local machine. - -{{site.data.alerts.callout_success}}Before beginning, it's useful to collect each of your machine's internal and external IP addresses, as well as any server names you want to issue certificates for.{{site.data.alerts.end}} - -1. 
[Install CockroachDB](install-cockroachdb.html) on your local machine, if you haven't already. - -2. Create two directories: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ mkdir certs - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ mkdir my-safe-directory - ~~~ - - `certs`: You'll generate your CA certificate and all node and client certificates and keys in this directory and then upload some of the files to your nodes. - - `my-safe-directory`: You'll generate your CA key in this directory and then reference the key when generating node and client certificates. After that, you'll keep the key safe and secret; you will not upload it to your nodes. - -3. Create the CA certificate and key: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-ca \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - -4. Create the certificate and key for the first node, issued to all common names you might use to refer to the node as well as to the load balancer instances: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-node \ - \ - \ - \ - \ - localhost \ - 127.0.0.1 \ - \ - \ - \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - -5. Upload the CA certificate and node certificate and key to the first node: - - {% if page.title contains "Google" %} - {% include_cached copy-clipboard.html %} - ~~~ shell - $ gcloud compute ssh \ - --project \ - --command "mkdir certs" - ~~~ - - {{site.data.alerts.callout_info}} - `gcloud compute ssh` associates your public SSH key with the GCP project and is only needed when connecting to the first node. See the [GCP docs](https://cloud.google.com/sdk/gcloud/reference/compute/ssh) for more details. - {{site.data.alerts.end}} - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ scp certs/ca.crt \ - certs/node.crt \ - certs/node.key \ - @:~/certs - ~~~ - - {% elsif page.title contains "AWS" %} - {% include_cached copy-clipboard.html %} - ~~~ shell - $ ssh-add /path/.pem - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ ssh @ "mkdir certs" - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ scp certs/ca.crt \ - certs/node.crt \ - certs/node.key \ - @:~/certs - ~~~ - - {% else %} - {% include_cached copy-clipboard.html %} - ~~~ shell - $ ssh @ "mkdir certs" - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ scp certs/ca.crt \ - certs/node.crt \ - certs/node.key \ - @:~/certs - ~~~ - {% endif %} - -6. Delete the local copy of the node certificate and key: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ rm certs/node.crt certs/node.key - ~~~ - - {{site.data.alerts.callout_info}} - This is necessary because the certificates and keys for additional nodes will also be named `node.crt` and `node.key`. As an alternative to deleting these files, you can run the next `cockroach cert create-node` commands with the `--overwrite` flag. - {{site.data.alerts.end}} - -7. Create the certificate and key for the second node, issued to all common names you might use to refer to the node as well as to the load balancer instances: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-node \ - \ - \ - \ - \ - localhost \ - 127.0.0.1 \ - \ - \ - \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - -8. 
Upload the CA certificate and node certificate and key to the second node: - - {% if page.title contains "AWS" %} - {% include_cached copy-clipboard.html %} - ~~~ shell - $ ssh @ "mkdir certs" - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ scp certs/ca.crt \ - certs/node.crt \ - certs/node.key \ - @:~/certs - ~~~ - - {% else %} - {% include_cached copy-clipboard.html %} - ~~~ shell - $ ssh @ "mkdir certs" - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ scp certs/ca.crt \ - certs/node.crt \ - certs/node.key \ - @:~/certs - ~~~ - {% endif %} - -9. Repeat steps 6 - 8 for each additional node. - -10. Create a client certificate and key for the `root` user: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-client \ - root \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - -11. Upload the CA certificate and client certificate and key to the machine where you will run a sample workload: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ ssh @ "mkdir certs" - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ scp certs/ca.crt \ - certs/client.root.crt \ - certs/client.root.key \ - @:~/certs - ~~~ - - In later steps, you'll also use the `root` user's certificate to run [`cockroach`](cockroach-commands.html) client commands from your local machine. If you might also want to run `cockroach` client commands directly on a node (e.g., for local debugging), you'll need to copy the `root` user's certificate and key to that node as well. - -{{site.data.alerts.callout_info}} -On accessing the DB Console in a later step, your browser will consider the CockroachDB-created certificate invalid and you’ll need to click through a warning message to get to the UI. You can avoid this issue by [using a certificate issued by a public CA](create-security-certificates-custom-ca.html#accessing-the-db-console-for-a-secure-cluster). -{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/prod-deployment/secure-initialize-cluster.md b/src/current/_includes/v22.1/prod-deployment/secure-initialize-cluster.md deleted file mode 100644 index fc92a82b724..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/secure-initialize-cluster.md +++ /dev/null @@ -1,8 +0,0 @@ -On your local machine, run the [`cockroach init`](cockroach-init.html) command to complete the node startup process and have them join together as a cluster: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach init --certs-dir=certs --host=
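# Sketch only: --host takes the advertised address of any node you started.
# Assuming a hypothetical node address of 10.0.0.1, the filled-in command would be:
#   cockroach init --certs-dir=certs --host=10.0.0.1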
-~~~ - -After running this command, each node prints helpful details to the [standard output](cockroach-start.html#standard-output), such as the CockroachDB version, the URL for the DB Console, and the SQL URL for clients. diff --git a/src/current/_includes/v22.1/prod-deployment/secure-recommendations.md b/src/current/_includes/v22.1/prod-deployment/secure-recommendations.md deleted file mode 100644 index 528850dbbb0..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/secure-recommendations.md +++ /dev/null @@ -1,7 +0,0 @@ -- Decide how you want to access your DB Console: - - Access Level | Description - -------------|------------ - Partially open | Set a firewall rule to allow only specific IP addresses to communicate on port `8080`. - Completely open | Set a firewall rule to allow all IP addresses to communicate on port `8080`. - Completely closed | Set a firewall rule to disallow all communication on port `8080`. In this case, a machine with SSH access to a node could use an SSH tunnel to access the DB Console. diff --git a/src/current/_includes/v22.1/prod-deployment/secure-requirements.md b/src/current/_includes/v22.1/prod-deployment/secure-requirements.md deleted file mode 100644 index 5c35b0898c8..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/secure-requirements.md +++ /dev/null @@ -1,11 +0,0 @@ -- You must have [CockroachDB installed](install-cockroachdb.html) locally. This is necessary for generating and managing your deployment's certificates. - -- You must have [SSH access]({{page.ssh-link}}) to each machine. This is necessary for distributing and starting CockroachDB binaries. - -- Your network configuration must allow TCP communication on the following ports: - - `26257` for intra-cluster and client-cluster communication - - `8080` to expose your DB Console - -- Carefully review the [Production Checklist](recommended-production-settings.html), including supported hardware and software, and the recommended [Topology Patterns](topology-patterns.html). - -{% include {{ page.version.version }}/prod-deployment/topology-recommendations.md %} \ No newline at end of file diff --git a/src/current/_includes/v22.1/prod-deployment/secure-scale-cluster.md b/src/current/_includes/v22.1/prod-deployment/secure-scale-cluster.md deleted file mode 100644 index 55af10fc740..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/secure-scale-cluster.md +++ /dev/null @@ -1,124 +0,0 @@ -You can start the nodes manually or automate the process using [systemd](https://www.freedesktop.org/wiki/Software/systemd/). - -
- - -
-

- -
- -For each additional node you want to add to the cluster, complete the following steps: - -1. SSH to the machine where you want the node to run. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - -3. Copy the binary into the `PATH`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -4. Run the [`cockroach start`](cockroach-start.html) command, passing the new node's address as the `--advertise-addr` flag and pointing `--join` to the three existing nodes (also include `--locality` if you set it earlier). - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - --certs-dir=certs \ - --advertise-addr= \ - --join=,, \ - --cache=.25 \ - --max-sql-memory=.25 \ - --background - ~~~ - -5. Update your load balancer to recognize the new node. - -
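As a concrete sketch of the `cockroach start` command in step 4, using hypothetical addresses (the new node at `10.0.0.4`, joining existing nodes that advertise `10.0.0.1`, `10.0.0.2`, and `10.0.0.3`):

~~~ shell
cockroach start \
--certs-dir=certs \
--advertise-addr=10.0.0.4 \
--join=10.0.0.1,10.0.0.2,10.0.0.3 \
--cache=.25 \
--max-sql-memory=.25 \
--background
~~~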
- -
- -For each additional node you want to add to the cluster, complete the following steps: - -1. SSH to the machine where you want the node to run. Ensure you are logged in as the `root` user. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - -3. Copy the binary into the `PATH`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -4. Create the Cockroach directory: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ mkdir /var/lib/cockroach - ~~~ - -5. Create a Unix user named `cockroach`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ useradd cockroach - ~~~ - -6. Move the `certs` directory to the `cockroach` directory. - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ mv certs /var/lib/cockroach/ - ~~~ - -7. Change the ownership of the `cockroach` directory to the user `cockroach`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ chown -R cockroach /var/lib/cockroach - ~~~ - -8. Download the [sample configuration template](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/securecockroachdb.service): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ wget -qO- https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/securecockroachdb.service - ~~~ - - Alternatively, you can create the file yourself and copy the script into it: - - {% include_cached copy-clipboard.html %} - ~~~ shell - {% include {{ page.version.version }}/prod-deployment/securecockroachdb.service %} - ~~~ - - Save the file in the `/etc/systemd/system/` directory. - -9. Customize the sample configuration template for your deployment: - - Specify values for the following flags in the sample configuration template: - - {% include {{ page.version.version }}/prod-deployment/advertise-addr-join.md %} - -10. Repeat these steps for each additional node that you want in your cluster. - -
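After customizing the template, the new node is started through systemd, as elsewhere in this guide. A minimal sketch, assuming the unit file was saved as `securecockroachdb.service` in `/etc/systemd/system/` as described above:

~~~ shell
systemctl daemon-reload
systemctl start securecockroachdb
~~~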
diff --git a/src/current/_includes/v22.1/prod-deployment/secure-start-nodes.md b/src/current/_includes/v22.1/prod-deployment/secure-start-nodes.md deleted file mode 100644 index abe72cdbc39..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/secure-start-nodes.md +++ /dev/null @@ -1,195 +0,0 @@ -You can start the nodes manually or automate the process using [systemd](https://www.freedesktop.org/wiki/Software/systemd/). - -
- - -
-

- -
- -For each initial node of your cluster, complete the following steps: - -{{site.data.alerts.callout_info}} -After completing these steps, nodes will not yet be live. They will complete the startup process and join together to form a cluster as soon as the cluster is initialized in the next step. -{{site.data.alerts.end}} - -1. SSH to the machine where you want the node to run. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - -3. Copy the binary into the `PATH`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -4. CockroachDB uses custom-built versions of the [GEOS](spatial-glossary.html#geos) libraries. Copy these libraries to the location where CockroachDB expects to find them: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ mkdir -p /usr/local/lib/cockroach - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/lib/libgeos.so /usr/local/lib/cockroach/ - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/lib/libgeos_c.so /usr/local/lib/cockroach/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -5. Run the [`cockroach start`](cockroach-start.html) command: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - --certs-dir=certs \ - --advertise-addr= \ - --join=,, \ - --cache=.25 \ - --max-sql-memory=.25 \ - --background - ~~~ - - This command primes the node to start, using the following flags: - - Flag | Description - -----|------------ - `--certs-dir` | Specifies the directory where you placed the `ca.crt` file and the `node.crt` and `node.key` files for the node. - `--advertise-addr` | Specifies the IP address/hostname and port to tell other nodes to use. The port number can be omitted, in which case it defaults to `26257`.

This value must route to an IP address the node is listening on (with `--listen-addr` unspecified, the node listens on all IP addresses).

In some networking scenarios, you may need to use `--advertise-addr` and/or `--listen-addr` differently. For more details, see [Networking](recommended-production-settings.html#networking). - `--join` | Identifies the address of 3-5 of the initial nodes of the cluster. These addresses should match the addresses that the target nodes are advertising. - `--cache`
`--max-sql-memory` | Increases the node's cache size to 25% of available system memory to improve read performance. The capacity for in-memory SQL processing defaults to 25% of system memory but can be raised, if necessary, to increase the number of simultaneous client connections allowed by the node as well as the node's capacity for in-memory processing of rows when using `ORDER BY`, `GROUP BY`, `DISTINCT`, joins, and window functions. For more details, see [Cache and SQL Memory Size](recommended-production-settings.html#cache-and-sql-memory-size). - `--background` | Starts the node in the background so you gain control of the terminal to issue more commands. - - When deploying across multiple datacenters, or when there is otherwise high latency between nodes, it is recommended to set `--locality` as well. It is also required to use certain [{{ site.data.products.enterprise }} features](enterprise-licensing.html). For more details, see [Locality](cockroach-start.html#locality). - - For other flags not explicitly set, the command uses default values. For example, the node stores data in `--store=cockroach-data` and binds DB Console HTTP requests to `--http-addr=:8080`. To set these options manually, see [Start a Node](cockroach-start.html). - -6. Repeat these steps for each additional node that you want in your cluster. - -
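As a sketch of overriding the defaults mentioned above, a node could be started with an explicit store path, DB Console address, and locality (all values here are hypothetical):

~~~ shell
cockroach start \
--certs-dir=certs \
--advertise-addr=10.0.0.1 \
--join=10.0.0.1,10.0.0.2,10.0.0.3 \
--cache=.25 \
--max-sql-memory=.25 \
--store=/mnt/cockroach-data \
--http-addr=10.0.0.1:8080 \
--locality=region=us-east,zone=us-east-1a \
--background
~~~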
- -
- -For each initial node of your cluster, complete the following steps: - -{{site.data.alerts.callout_info}} -After completing these steps, nodes will not yet be live. They will complete the startup process and join together to form a cluster as soon as the cluster is initialized in the next step. -{{site.data.alerts.end}} - -1. SSH to the machine where you want the node to run. Ensure you are logged in as the `root` user. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - -3. Copy the binary into the `PATH`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -4. CockroachDB uses custom-built versions of the [GEOS](spatial-glossary.html#geos) libraries. Copy these libraries to the location where CockroachDB expects to find them: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ mkdir -p /usr/local/lib/cockroach - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/lib/libgeos.so /usr/local/lib/cockroach/ - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/lib/libgeos_c.so /usr/local/lib/cockroach/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -5. Create the Cockroach directory: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ mkdir /var/lib/cockroach - ~~~ - -6. Create a Unix user named `cockroach`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ useradd cockroach - ~~~ - -7. Move the `certs` directory to the `cockroach` directory. - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ mv certs /var/lib/cockroach/ - ~~~ - -8. Change the ownership of the `cockroach` directory to the user `cockroach`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ chown -R cockroach /var/lib/cockroach - ~~~ - -9. Download the [sample configuration template](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/securecockroachdb.service) and save the file in the `/etc/systemd/system/` directory: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ wget -qO- https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/securecockroachdb.service - ~~~ - - Alternatively, you can create the file yourself and copy the script into it: - - {% include_cached copy-clipboard.html %} - ~~~ shell - {% include {{ page.version.version }}/prod-deployment/securecockroachdb.service %} - ~~~ - -10. In the sample configuration template, specify values for the following flags: - - {% include {{ page.version.version }}/prod-deployment/advertise-addr-join.md %} - - When deploying across multiple datacenters, or when there is otherwise high latency between nodes, it is recommended to set `--locality` as well. It is also required to use certain [{{ site.data.products.enterprise }} features](enterprise-licensing.html). For more details, see [Locality](cockroach-start.html#locality). 
- - For other flags not explicitly set, the command uses default values. For example, the node stores data in `--store=cockroach-data` and binds DB Console HTTP requests to `--http-addr=localhost:8080`. To set these options manually, see [Start a Node](cockroach-start.html). - -11. Start the CockroachDB cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ systemctl start securecockroachdb - ~~~ - -11. Repeat these steps for each additional node that you want in your cluster. - -{{site.data.alerts.callout_info}} -`systemd` handles node restarts in case of node failure. To stop a node without `systemd` restarting it, run `systemctl stop securecockroachdb` -{{site.data.alerts.end}} - -
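To confirm that a node managed by `systemd` is running, or to inspect its recent logs, the standard `systemctl` and `journalctl` commands apply. A sketch using the unit name from this guide:

~~~ shell
systemctl status securecockroachdb
journalctl -u securecockroachdb -n 100
~~~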
diff --git a/src/current/_includes/v22.1/prod-deployment/secure-test-cluster.md b/src/current/_includes/v22.1/prod-deployment/secure-test-cluster.md deleted file mode 100644 index cbd81488b0d..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/secure-test-cluster.md +++ /dev/null @@ -1,41 +0,0 @@ -CockroachDB replicates and distributes data behind-the-scenes and uses a [Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) to enable each node to locate data across the cluster. Once a cluster is live, any node can be used as a SQL gateway. - -When using a load balancer, you should issue commands directly to the load balancer, which then routes traffic to the nodes. - -Use the [built-in SQL client](cockroach-sql.html) locally as follows: - -1. On your local machine, launch the built-in SQL client, with the `--host` flag set to the address of the load balancer: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --certs-dir=certs --host=
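# Sketch only: --host takes the address of the load balancer. Assuming a
# hypothetical load balancer address of 10.0.0.100, the filled-in command would be:
#   cockroach sql --certs-dir=certs --host=10.0.0.100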
- ~~~ - -2. Create a `securenodetest` database: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE DATABASE securenodetest; - ~~~ - -3. View the cluster's databases, which will include `securenodetest`: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SHOW DATABASES; - ~~~ - - ~~~ - +--------------------+ - | Database | - +--------------------+ - | crdb_internal | - | information_schema | - | securenodetest | - | pg_catalog | - | system | - +--------------------+ - (5 rows) - ~~~ - -4. Use `\q` to exit the SQL shell. \ No newline at end of file diff --git a/src/current/_includes/v22.1/prod-deployment/secure-test-load-balancing.md b/src/current/_includes/v22.1/prod-deployment/secure-test-load-balancing.md deleted file mode 100644 index ea892f8ab33..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/secure-test-load-balancing.md +++ /dev/null @@ -1,77 +0,0 @@ -CockroachDB comes with a number of [built-in workloads](cockroach-workload.html) for simulating client traffic. This step features CockroachDB's version of the [TPC-C](http://www.tpc.org/tpcc/) workload. - -{{site.data.alerts.callout_info}} -Be sure that you have configured your network to allow traffic from the application to the load balancer. In this case, you will run the sample workload on one of your machines. The traffic source should therefore be the **internal (private)** IP address of that machine. -{{site.data.alerts.end}} - -For comprehensive guidance on benchmarking CockroachDB with TPC-C, refer to [Performance Benchmarking](performance-benchmarking-with-tpcc-local.html). - -1. SSH to the machine where you want to run the sample TPC-C workload. - - This should be a machine that is not running a CockroachDB node, and it should already have a `certs` directory containing `ca.crt`, `client.root.crt`, and `client.root.key` files. - -1. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - -1. Copy the binary into the `PATH`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -1. Use the [`cockroach workload`](cockroach-workload.html) command to load the initial schema and data, pointing it at the IP address of the load balancer: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach workload init tpcc \ - 'postgresql://root@:26257/tpcc?sslmode=verify-full&sslrootcert=certs/ca.crt&sslcert=certs/client.root.crt&sslkey=certs/client.root.key' - ~~~ - -1. 
Use the `cockroach workload` command to run the workload for 10 minutes: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach workload run tpcc \ - --duration=10m \ - 'postgresql://root@:26257/tpcc?sslmode=verify-full&sslrootcert=certs/ca.crt&sslcert=certs/client.root.crt&sslkey=certs/client.root.key' - ~~~ - - You'll see per-operation statistics print to standard output every second: - - ~~~ - _elapsed___errors__ops/sec(inst)___ops/sec(cum)__p50(ms)__p95(ms)__p99(ms)_pMax(ms) - 1s 0 1443.4 1494.8 4.7 9.4 27.3 67.1 transfer - 2s 0 1686.5 1590.9 4.7 8.1 15.2 28.3 transfer - 3s 0 1735.7 1639.0 4.7 7.3 11.5 28.3 transfer - 4s 0 1542.6 1614.9 5.0 8.9 12.1 21.0 transfer - 5s 0 1695.9 1631.1 4.7 7.3 11.5 22.0 transfer - 6s 0 1569.2 1620.8 5.0 8.4 11.5 15.7 transfer - 7s 0 1614.6 1619.9 4.7 8.1 12.1 16.8 transfer - 8s 0 1344.4 1585.6 5.8 10.0 15.2 31.5 transfer - 9s 0 1351.9 1559.5 5.8 10.0 16.8 54.5 transfer - 10s 0 1514.8 1555.0 5.2 8.1 12.1 16.8 transfer - ... - ~~~ - - After the specified duration (10 minutes in this case), the workload will stop and you'll see totals printed to standard output: - - ~~~ - _elapsed___errors_____ops(total)___ops/sec(cum)__avg(ms)__p50(ms)__p95(ms)__p99(ms)_pMax(ms)__result - 600.0s 0 823902 1373.2 5.8 5.5 10.0 15.2 209.7 - ~~~ - - {{site.data.alerts.callout_success}} - For more `tpcc` options, use `cockroach workload run tpcc --help`. For details about other workloads built into the `cockroach` binary, use `cockroach workload --help`. - {{site.data.alerts.end}} - -1. To monitor the load generator's progress, open the [DB Console](ui-overview.html) by pointing a browser to the address in the `admin` field in the standard output of any node on startup. - - Since the load generator is pointed at the load balancer, the connections will be evenly distributed across nodes. To verify this, click **Metrics** on the left, select the **SQL** dashboard, and then check the **SQL Connections** graph. You can use the **Graph** menu to filter the graph for specific nodes. diff --git a/src/current/_includes/v22.1/prod-deployment/securecockroachdb.service b/src/current/_includes/v22.1/prod-deployment/securecockroachdb.service deleted file mode 100644 index 13658ae4cce..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/securecockroachdb.service +++ /dev/null @@ -1,16 +0,0 @@ -[Unit] -Description=Cockroach Database cluster node -Requires=network.target -[Service] -Type=notify -WorkingDirectory=/var/lib/cockroach -ExecStart=/usr/local/bin/cockroach start --certs-dir=certs --advertise-addr= --join=,, --cache=.25 --max-sql-memory=.25 -TimeoutStopSec=300 -Restart=always -RestartSec=10 -StandardOutput=syslog -StandardError=syslog -SyslogIdentifier=cockroach -User=cockroach -[Install] -WantedBy=default.target diff --git a/src/current/_includes/v22.1/prod-deployment/synchronize-clocks.md b/src/current/_includes/v22.1/prod-deployment/synchronize-clocks.md deleted file mode 100644 index ecd82f67d17..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/synchronize-clocks.md +++ /dev/null @@ -1,179 +0,0 @@ -CockroachDB requires moderate levels of [clock synchronization](recommended-production-settings.html#clock-synchronization) to preserve data consistency. For this reason, when a node detects that its clock is out of sync with at least half of the other nodes in the cluster by 80% of the maximum offset allowed (500ms by default), it spontaneously shuts down. 
This avoids the risk of consistency anomalies, but it's best to prevent clocks from drifting too far in the first place by running clock synchronization software on each node. - -{% if page.title contains "Digital Ocean" or page.title contains "On-Premises" %} - -[`ntpd`](http://doc.ntp.org/) should keep offsets in the single-digit milliseconds, so that software is featured here, but other methods of clock synchronization are suitable as well. - -1. SSH to the first machine. - -2. Disable `timesyncd`, which tends to be active by default on some Linux distributions: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ sudo timedatectl set-ntp no - ~~~ - - Verify that `timesyncd` is off: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ timedatectl - ~~~ - - Look for `Network time on: no` or `NTP enabled: no` in the output. - -3. Install the `ntp` package: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ sudo apt-get install ntp - ~~~ - -4. Stop the NTP daemon: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ sudo service ntp stop - ~~~ - -5. Sync the machine's clock with Google's NTP service: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ sudo ntpd -b time.google.com - ~~~ - - To make this change permanent, in the `/etc/ntp.conf` file, remove or comment out any lines starting with `server` or `pool` and add the following lines: - - {% include_cached copy-clipboard.html %} - ~~~ - server time1.google.com iburst - server time2.google.com iburst - server time3.google.com iburst - server time4.google.com iburst - ~~~ - - Restart the NTP daemon: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ sudo service ntp start - ~~~ - - {{site.data.alerts.callout_info}} - We recommend Google's NTP service because it handles ["smearing" the leap second](https://developers.google.com/time/smear). If you use a different NTP service that doesn't smear the leap second, be sure to configure client-side smearing in the same way on each machine. See the [Production Checklist](recommended-production-settings.html#considerations) for details. - {{site.data.alerts.end}} - -6. Verify that the machine is using a Google NTP server: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ sudo ntpq -p - ~~~ - - The active NTP server will be marked with an asterisk. - -7. Repeat these steps for each machine where a CockroachDB node will run. - -{% elsif page.title contains "Google" %} - -Compute Engine instances are preconfigured to use [NTP](http://www.ntp.org/), which should keep offsets in the single-digit milliseconds. However, Google can’t predict how external NTP services, such as `pool.ntp.org`, will handle the leap second. Therefore, you should: - -- [Configure each GCE instance to use Google's internal NTP service](https://cloud.google.com/compute/docs/instances/configure-ntp#configure_ntp_for_your_instances). -- If you plan to run a hybrid cluster across GCE and other cloud providers or environments, note that all of the nodes must be synced to the same time source, or to different sources that implement leap second smearing in the same way. See the [Production Checklist](recommended-production-settings.html#considerations) for details. - -{% elsif page.title contains "AWS" %} - -Amazon provides the [Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html), which uses a fleet of satellite-connected and atomic reference clocks in each AWS Region to deliver accurate current time readings. 
The service also smears the leap second. - -- [Configure each AWS instance to use the internal Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service). - - Per the above instructions, ensure that `etc/chrony.conf` on the instance contains the line `server 169.254.169.123 prefer iburst minpoll 4 maxpoll 4` and that other `server` or `pool` lines are commented out. - - To verify that Amazon Time Sync Service is being used, run `chronyc sources -v` and check for a line containing `* 169.254.169.123`. The `*` denotes the preferred time server. -- If you plan to run a hybrid cluster across GCE and other cloud providers or environments, note that all of the nodes must be synced to the same time source, or to different sources that implement leap second smearing in the same way. See the [Production Checklist](recommended-production-settings.html#considerations) for details. - -{% elsif page.title contains "Azure" %} - -[`ntpd`](http://doc.ntp.org/) should keep offsets in the single-digit milliseconds, so that software is featured here. However, to run `ntpd` properly on Azure VMs, it's necessary to first unbind the Time Synchronization device used by the Hyper-V technology running Azure VMs; this device aims to synchronize time between the VM and its host operating system but has been known to cause problems. - -1. SSH to the first machine. - -2. Find the ID of the Hyper-V Time Synchronization device: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl -O https://raw.githubusercontent.com/torvalds/linux/master/tools/hv/lsvmbus - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ python lsvmbus -vv | grep -w "Time Synchronization" -A 3 - ~~~ - - ~~~ - VMBUS ID 12: Class_ID = {9527e630-d0ae-497b-adce-e80ab0175caf} - [Time Synchronization] - Device_ID = {2dd1ce17-079e-403c-b352-a1921ee207ee} - Sysfs path: /sys/bus/vmbus/devices/2dd1ce17-079e-403c-b352-a1921ee207ee - Rel_ID=12, target_cpu=0 - ~~~ - -3. Unbind the device, using the `Device_ID` from the previous command's output: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ echo | sudo tee /sys/bus/vmbus/drivers/hv_utils/unbind - ~~~ - -4. Install the `ntp` package: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ sudo apt-get install ntp - ~~~ - -5. Stop the NTP daemon: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ sudo service ntp stop - ~~~ - -6. Sync the machine's clock with Google's NTP service: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ sudo ntpd -b time.google.com - ~~~ - - To make this change permanent, in the `/etc/ntp.conf` file, remove or comment out any lines starting with `server` or `pool` and add the following lines: - - {% include_cached copy-clipboard.html %} - ~~~ - server time1.google.com iburst - server time2.google.com iburst - server time3.google.com iburst - server time4.google.com iburst - ~~~ - - Restart the NTP daemon: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ sudo service ntp start - ~~~ - - {{site.data.alerts.callout_info}} - We recommend Google's NTP service because it handles ["smearing" the leap second](https://developers.google.com/time/smear). If you use a different NTP service that doesn't smear the leap second, be sure to configure client-side smearing in the same way on each machine. See the [Production Checklist](recommended-production-settings.html#considerations) for details. - {{site.data.alerts.end}} - -7. 
Verify that the machine is using a Google NTP server: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ sudo ntpq -p - ~~~ - - The active NTP server will be marked with an asterisk. - -8. Repeat these steps for each machine where a CockroachDB node will run. - -{% endif %} diff --git a/src/current/_includes/v22.1/prod-deployment/terminology-vcpu.md b/src/current/_includes/v22.1/prod-deployment/terminology-vcpu.md deleted file mode 100644 index 790ce37a2b9..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/terminology-vcpu.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -In our sizing and production guidance, 1 vCPU is considered equivalent to 1 core in the underlying hardware platform. -{{site.data.alerts.end}} \ No newline at end of file diff --git a/src/current/_includes/v22.1/prod-deployment/topology-recommendations.md b/src/current/_includes/v22.1/prod-deployment/topology-recommendations.md deleted file mode 100644 index 31384079cec..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/topology-recommendations.md +++ /dev/null @@ -1,19 +0,0 @@ -- Run each node on a separate machine. Since CockroachDB replicates across nodes, running more than one node per machine increases the risk of data loss if a machine fails. Likewise, if a machine has multiple disks or SSDs, run one node with multiple `--store` flags and not one node per disk. For more details about stores, see [Start a Node](cockroach-start.html#store). - -- When starting each node, use the [`--locality`](cockroach-start.html#locality) flag to describe the node's location, for example, `--locality=region=west,zone=us-west-1`. The key-value pairs should be ordered from most to least inclusive, and the keys and order of key-value pairs must be the same on all nodes. - -- When deploying in a single availability zone: - - - To be able to tolerate the failure of any 1 node, use at least 3 nodes with the [`default` 3-way replication factor](configure-replication-zones.html#view-the-default-replication-zone). In this case, if 1 node fails, each range retains 2 of its 3 replicas, a majority. - - - To be able to tolerate 2 simultaneous node failures, use at least 5 nodes and [increase the `default` replication factor for user data](configure-replication-zones.html#edit-the-default-replication-zone) to 5. The replication factor for [important internal data](configure-replication-zones.html#create-a-replication-zone-for-a-system-range) is 5 by default, so no adjustments are needed for internal data. In this case, if 2 nodes fail at the same time, each range retains 3 of its 5 replicas, a majority. - -- When deploying across multiple availability zones: - - - To be able to tolerate the failure of 1 entire AZ in a region, use at least 3 AZs per region and set `--locality` on each node to spread data evenly across regions and AZs. In this case, if 1 AZ goes offline, the 2 remaining AZs retain a majority of replicas. - - - To ensure that ranges are split evenly across nodes, use the same number of nodes in each AZ. This is to avoid overloading any nodes with excessive resource consumption. - -- When deploying across multiple regions: - - - To be able to tolerate the failure of 1 entire region, use at least 3 regions. 
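As a sketch of the locality guidance above (addresses and locality values are hypothetical), three nodes spread across three availability zones in one region each pass the same locality keys in the same order; with at least 5 nodes, the replication factor for user data can also be raised from the SQL shell:

~~~ shell
cockroach start --certs-dir=certs --advertise-addr=10.0.1.1 \
  --locality=region=us-east,zone=us-east-1a --join=10.0.1.1,10.0.2.1,10.0.3.1 --background

cockroach start --certs-dir=certs --advertise-addr=10.0.2.1 \
  --locality=region=us-east,zone=us-east-1b --join=10.0.1.1,10.0.2.1,10.0.3.1 --background

cockroach start --certs-dir=certs --advertise-addr=10.0.3.1 \
  --locality=region=us-east,zone=us-east-1c --join=10.0.1.1,10.0.2.1,10.0.3.1 --background

# With 5 or more nodes, raise the default replication factor for user data to 5:
cockroach sql --certs-dir=certs --host=10.0.1.1 \
  -e "ALTER RANGE default CONFIGURE ZONE USING num_replicas = 5;"
~~~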
\ No newline at end of file diff --git a/src/current/_includes/v22.1/prod-deployment/use-cluster.md b/src/current/_includes/v22.1/prod-deployment/use-cluster.md deleted file mode 100644 index 0e65c9fb94c..00000000000 --- a/src/current/_includes/v22.1/prod-deployment/use-cluster.md +++ /dev/null @@ -1,12 +0,0 @@ -Now that your deployment is working, you can: - -1. [Implement your data model](sql-statements.html). -1. [Create users](create-user.html) and [grant them privileges](grant.html). -1. [Connect your application](install-client-drivers.html). Be sure to connect your application to the load balancer, not to a CockroachDB node. -1. [Take backups](take-full-and-incremental-backups.html) of your data. - -You may also want to adjust the way the cluster replicates data. For example, by default, a multi-node cluster replicates all data 3 times; you can change this replication factor or create additional rules for replicating individual databases and tables differently. For more information, see [Configure Replication Zones](configure-replication-zones.html). - -{{site.data.alerts.callout_danger}} -When running a cluster of 5 nodes or more, it's safest to [increase the replication factor for important internal data](configure-replication-zones.html#create-a-replication-zone-for-a-system-range) to 5, even if you do not do so for user data. For the cluster as a whole to remain available, the ranges for this internal data must always retain a majority of their replicas. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/scram-authentication-recommendations.md b/src/current/_includes/v22.1/scram-authentication-recommendations.md deleted file mode 100644 index 2ad41f75cd8..00000000000 --- a/src/current/_includes/v22.1/scram-authentication-recommendations.md +++ /dev/null @@ -1,4 +0,0 @@ -- Test and adjust your workloads in batches when migrating to SCRAM authentication. -- Start by enabling SCRAM authentication in a testing environment, and test the performance of your client application against the types of workloads you expect it to handle in production before rolling the changes out to production. -- Limit the maximum number of connections in the client driver's connection pool. -- Limit the maximum number of concurrent transactions the client application can issue. diff --git a/src/current/_includes/v22.1/setup/create-a-free-cluster.md b/src/current/_includes/v22.1/setup/create-a-free-cluster.md deleted file mode 100644 index 101a57da5e0..00000000000 --- a/src/current/_includes/v22.1/setup/create-a-free-cluster.md +++ /dev/null @@ -1,7 +0,0 @@ -1. If you haven't already, sign up for a CockroachDB {{ site.data.products.cloud }} account. -1. [Log in](https://cockroachlabs.cloud/) to your CockroachDB {{ site.data.products.cloud }} account. -1. On the **Clusters** page, click **Create Cluster**. -1. On the **Create your cluster** page, select **Serverless**. -1. Click **Create cluster**. - - Your cluster will be created in a few seconds and the **Create SQL user** dialog will display. \ No newline at end of file diff --git a/src/current/_includes/v22.1/setup/create-first-sql-user.md b/src/current/_includes/v22.1/setup/create-first-sql-user.md deleted file mode 100644 index e85b983ba64..00000000000 --- a/src/current/_includes/v22.1/setup/create-first-sql-user.md +++ /dev/null @@ -1,8 +0,0 @@ -The **Create SQL user** dialog allows you to create a new SQL user and password. - -1. Enter a username in the **SQL user** field or use the one provided by default. -1. 
Click **Generate & save password**. -1. Copy the generated password and save it in a secure location. -1. Click **Next**. - - By default, all new SQL users are created with full privileges. For more information and to change the default settings, refer to [Manage SQL users on a cluster](../cockroachcloud/managing-access.html#manage-sql-users-on-a-cluster). \ No newline at end of file diff --git a/src/current/_includes/v22.1/setup/init-bank-sample.md b/src/current/_includes/v22.1/setup/init-bank-sample.md deleted file mode 100644 index 77cfd76c34d..00000000000 --- a/src/current/_includes/v22.1/setup/init-bank-sample.md +++ /dev/null @@ -1,38 +0,0 @@ -1. Set the `DATABASE_URL` environment variable to the connection string for your cluster: - -
- - {% include_cached copy-clipboard.html %} - ~~~ shell - export DATABASE_URL="postgresql://root@localhost:26257?sslmode=disable" - ~~~ - -
- -
- - {% include_cached copy-clipboard.html %} - ~~~ shell - export DATABASE_URL="{connection-string}" - ~~~ - - Where `{connection-string}` is the connection string you copied earlier. - -
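    For illustration only, a filled-in value might look like the sample connection string used elsewhere in this guide (the user, password, and host are placeholders, not real credentials):

    ~~~ shell
    export DATABASE_URL="postgresql://maxroach:ThisIsNotAGoodPassword@blue-dog-147.6wr.cockroachlabs.cloud:26257/defaultdb?sslmode=verify-full&sslrootcert=%2FUsers%2Fmaxroach%2F.postgresql%2Froot.crt"
    ~~~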
- - -1. To initialize the example database, use the [`cockroach sql`](cockroach-sql.html) command to execute the SQL statements in the `dbinit.sql` file: - - {% include_cached copy-clipboard.html %} - ~~~ shell - cat dbinit.sql | cockroach sql --url $DATABASE_URL - ~~~ - - The SQL statement in the initialization file should execute: - - ~~~ - CREATE TABLE - - - Time: 102ms - ~~~ diff --git a/src/current/_includes/v22.1/setup/sample-setup-certs.md b/src/current/_includes/v22.1/setup/sample-setup-certs.md deleted file mode 100644 index e97f02a636e..00000000000 --- a/src/current/_includes/v22.1/setup/sample-setup-certs.md +++ /dev/null @@ -1,78 +0,0 @@ - -
- - -
- -
- - -

Choose your installation method

- -You can create a CockroachDB {{ site.data.products.serverless }} cluster using either the CockroachDB Cloud Console, a web-based graphical user interface (GUI) tool, or ccloud, a command-line interface (CLI) tool. - -
- - -
- -
- -### Create a free cluster - -{% include {{ page.version.version }}/setup/create-a-free-cluster.md %} - -### Create a SQL user - -{% include {{ page.version.version }}/setup/create-first-sql-user.md %} - -### Get the root certificate - -The **Connect to cluster** dialog shows information about how to connect to your cluster. - -1. Select **General connection string** from the **Select option** dropdown. -1. Open a new terminal on your local machine, and run the **CA Cert download command** provided in the **Download CA Cert** section. The client driver used in this tutorial requires this certificate to connect to CockroachDB {{ site.data.products.cloud }}. - -### Get the connection string - -Open the **General connection string** section, then copy the connection string provided and save it in a secure location. - -{{site.data.alerts.callout_info}} -The connection string is pre-populated with your username, password, cluster name, and other details. Your password, in particular, will be provided *only once*. Save it in a secure place (Cockroach Labs recommends a password manager) to connect to your cluster in the future. If you forget your password, you can reset it by going to the **SQL Users** page for the cluster, found at `https://cockroachlabs.cloud/cluster//users`. -{{site.data.alerts.end}} - -
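Though not required by these steps, one quick way to sanity-check the saved connection string from a terminal is the built-in SQL client, shown here as a sketch using the illustrative sample string that appears later in this section:

~~~ shell
cockroach sql --url "postgresql://maxroach:ThisIsNotAGoodPassword@blue-dog-147.6wr.cockroachlabs.cloud:26257/defaultdb?sslmode=verify-full&sslrootcert=%2FUsers%2Fmaxroach%2F.postgresql%2Froot.crt"
~~~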
- -
- -Follow these steps to create a CockroachDB {{ site.data.products.serverless }} cluster using the ccloud CLI tool. - -{{site.data.alerts.callout_info}} -The ccloud CLI tool is in Preview. -{{site.data.alerts.end}} - -

Install ccloud

- -{% include cockroachcloud/ccloud/install-ccloud.md %} - -### Run `ccloud quickstart` to create a new cluster, create a SQL user, and retrieve the connection string. - -{% include cockroachcloud/ccloud/quickstart.md %} - -Select **General connection string**, then copy the connection string displayed and save it in a secure location. The connection string is the line starting `postgresql://`. - -~~~ -? How would you like to connect? General connection string -Retrieving cluster info: succeeded - Downloading cluster cert to /Users/maxroach/.postgresql/root.crt: succeeded -postgresql://maxroach:ThisIsNotAGoodPassword@blue-dog-147.6wr.cockroachlabs.cloud:26257/defaultdb?sslmode=verify-full&sslrootcert=%2FUsers%2Fmaxroach%2F.postgresql%2Froot.crt -~~~ -
- -
- -
- -{% include {{ page.version.version }}/setup/start-single-node-insecure.md %} - -
diff --git a/src/current/_includes/v22.1/setup/sample-setup-jdbc.md b/src/current/_includes/v22.1/setup/sample-setup-jdbc.md deleted file mode 100644 index 264e75e8ea7..00000000000 --- a/src/current/_includes/v22.1/setup/sample-setup-jdbc.md +++ /dev/null @@ -1,74 +0,0 @@ - -
- - -
- -
- -

Choose your installation method

- -You can create a CockroachDB {{ site.data.products.serverless }} cluster using either the CockroachDB Cloud Console, a web-based graphical user interface (GUI) tool, or ccloud, a command-line interface (CLI) tool. - -
- - -
- -
- -### Create a free cluster - -{% include {{ page.version.version }}/setup/create-a-free-cluster.md %} - -### Create a SQL user - -{% include {{ page.version.version }}/setup/create-first-sql-user.md %} - -### Get the connection string - -The **Connect to cluster** dialog shows information about how to connect to your cluster. - -1. Select **Java** from the **Select option/language** dropdown. -1. Select **JDBC** from the **Select tool** dropdown. -1. Copy the command provided to set the `JDBC_DATABASE_URL` environment variable. - - {{site.data.alerts.callout_info}} - The JDBC connection URL is pre-populated with your username, password, cluster name, and other details. Your password, in particular, will be provided *only once*. Save it in a secure place (Cockroach Labs recommends a password manager) to connect to your cluster in the future. If you forget your password, you can reset it by going to the **SQL Users** page for the cluster, found at `https://cockroachlabs.cloud/cluster//users`. - {{site.data.alerts.end}} - -
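As an illustrative sketch only (the exact command comes from the Console, and the host and credentials below are the placeholder sample values used throughout this guide), the variable holds a JDBC-style connection URL:

~~~ shell
export JDBC_DATABASE_URL="jdbc:postgresql://blue-dog-147.6wr.cockroachlabs.cloud:26257/defaultdb?user=maxroach&password=ThisIsNotAGoodPassword&sslmode=verify-full"
~~~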
- -
- -Follow these steps to create a CockroachDB {{ site.data.products.serverless }} cluster using the ccloud CLI tool. - -{{site.data.alerts.callout_info}} -The ccloud CLI tool is in Preview. -{{site.data.alerts.end}} - -

Install ccloud

- -{% include cockroachcloud/ccloud/install-ccloud.md %} - -### Run `ccloud quickstart` to create a new cluster, create a SQL user, and retrieve the connection string. - -{% include cockroachcloud/ccloud/quickstart.md %} - -Select **General connection string**, then copy the connection string displayed and save it in a secure location. The connection string is the line starting `postgresql://`. - -~~~ -? How would you like to connect? General connection string -Retrieving cluster info: succeeded - Downloading cluster cert to /Users/maxroach/.postgresql/root.crt: succeeded -postgresql://maxroach:ThisIsNotAGoodPassword@blue-dog-147.6wr.cockroachlabs.cloud:26257/defaultdb?sslmode=verify-full&sslrootcert=%2FUsers%2Fmaxroach%2F.postgresql%2Froot.crt -~~~ -
- -
- -
- -{% include {{ page.version.version }}/setup/start-single-node-insecure.md %} - -
\ No newline at end of file diff --git a/src/current/_includes/v22.1/setup/sample-setup-parameters-certs.md b/src/current/_includes/v22.1/setup/sample-setup-parameters-certs.md deleted file mode 100644 index d2ecc91fd78..00000000000 --- a/src/current/_includes/v22.1/setup/sample-setup-parameters-certs.md +++ /dev/null @@ -1,85 +0,0 @@ - -
- - -
- -
- - -

Choose your installation method

- -You can create a CockroachDB {{ site.data.products.serverless }} cluster using either the CockroachDB Cloud Console, a web-based graphical user interface (GUI) tool, or ccloud, a command-line interface (CLI) tool. - -
- - -
- -
- -### Create a free cluster - -{% include {{ page.version.version }}/setup/create-a-free-cluster.md %} - -### Create a SQL user - -{% include {{ page.version.version }}/setup/create-first-sql-user.md %} - -### Get the root certificate - -The **Connect to cluster** dialog shows information about how to connect to your cluster. - -1. Select **General connection string** from the **Select option** dropdown. -1. Open a new terminal on your local machine, and run the **CA Cert download command** provided in the **Download CA Cert** section. The client driver used in this tutorial requires this certificate to connect to CockroachDB {{ site.data.products.cloud }}. - -### Get the connection information - -1. Select **Parameters only** from the **Select option** dropdown. -1. Copy the connection information for each parameter displayed and save it in a secure location. - -
- -
- -Follow these steps to create a CockroachDB {{ site.data.products.serverless }} cluster using the ccloud CLI tool. - -{{site.data.alerts.callout_info}} -The ccloud CLI tool is in Preview. -{{site.data.alerts.end}} - -

Install ccloud

- -{% include cockroachcloud/ccloud/install-ccloud.md %} - -### Run `ccloud quickstart` to create a new cluster, create a SQL user, and retrieve the connection string. - -{% include cockroachcloud/ccloud/quickstart.md %} - -Select **Parameters only** then copy the connection parameters displayed and save them in a secure location. - -~~~ -? How would you like to connect? Parameters only -Looking up cluster ID: succeeded -Creating SQL user: succeeded -Success! Created SQL user - name: maxroach - cluster: 37174250-b944-461f-b1c1-3a99edb6af32 -Retrieving cluster info: succeeded -Connection parameters - Database: defaultdb - Host: blue-dog-147.6wr.cockroachlabs.cloud - Password: ThisIsNotAGoodPassword - Port: 26257 - Username: maxroach -~~~ - -
- -
- -
- -{% include {{ page.version.version }}/setup/start-single-node-insecure.md %} - -
diff --git a/src/current/_includes/v22.1/setup/sample-setup-parameters.md b/src/current/_includes/v22.1/setup/sample-setup-parameters.md deleted file mode 100644 index 3d8dd5da95e..00000000000 --- a/src/current/_includes/v22.1/setup/sample-setup-parameters.md +++ /dev/null @@ -1,79 +0,0 @@ - -
- - -
- -
- -

Choose your installation method

- -You can create a CockroachDB {{ site.data.products.serverless }} cluster using either the CockroachDB Cloud Console, a web-based graphical user interface (GUI) tool, or ccloud, a command-line interface (CLI) tool. - -
- - -
- -
- -### Create a free cluster - -{% include {{ page.version.version }}/setup/create-a-free-cluster.md %} - -### Create a SQL user - -{% include {{ page.version.version }}/setup/create-first-sql-user.md %} - -### Get the connection information - -The **Connect to cluster** dialog shows information about how to connect to your cluster. - -1. Select **Parameters only** from the **Select option** dropdown. -1. Copy the connection information for each parameter displayed and save it in a secure location. - -
- -
- -Follow these steps to create a CockroachDB {{ site.data.products.serverless }} cluster using the ccloud CLI tool. - -{{site.data.alerts.callout_info}} -The ccloud CLI tool is in Preview. -{{site.data.alerts.end}} - -

Install ccloud

- -{% include cockroachcloud/ccloud/install-ccloud.md %} - -### Run `ccloud quickstart` to create a new cluster, create a SQL user, and retrieve the connection string. - -{% include cockroachcloud/ccloud/quickstart.md %} - -Select **Parameters only** then copy the connection parameters displayed and save them in a secure location. - -~~~ -? How would you like to connect? Parameters only -Looking up cluster ID: succeeded -Creating SQL user: succeeded -Success! Created SQL user - name: maxroach - cluster: 37174250-b944-461f-b1c1-3a99edb6af32 -Retrieving cluster info: succeeded -Connection parameters - Database: defaultdb - Host: blue-dog-147.6wr.cockroachlabs.cloud - Password: ThisIsNotAGoodPassword - Port: 26257 - Username: maxroach -~~~ - -
- -
- -
- -{% include {{ page.version.version }}/setup/start-single-node-insecure.md %} - -
diff --git a/src/current/_includes/v22.1/setup/sample-setup.md b/src/current/_includes/v22.1/setup/sample-setup.md deleted file mode 100644 index be6bb78e6ac..00000000000 --- a/src/current/_includes/v22.1/setup/sample-setup.md +++ /dev/null @@ -1,75 +0,0 @@ - -
- - -
- -
- -

Choose your installation method

- -You can create a CockroachDB {{ site.data.products.serverless }} cluster using either the CockroachDB Cloud Console, a web-based graphical user interface (GUI) tool, or ccloud, a command-line interface (CLI) tool. - -
- - -
- -
- -### Create a free cluster - -{% include {{ page.version.version }}/setup/create-a-free-cluster.md %} - -### Create a SQL user - -{% include {{ page.version.version }}/setup/create-first-sql-user.md %} - -### Get the connection string - -The **Connect to cluster** dialog shows information about how to connect to your cluster. - -1. Select **General connection string** from the **Select option** dropdown. -1. Open the **General connection string** section, then copy the connection string provided and save it in a secure location. - - The sample application used in this tutorial uses system CA certificates for server certificate verification, so you can skip the **Download CA Cert** instructions. - - {{site.data.alerts.callout_info}} - The connection string is pre-populated with your username, password, cluster name, and other details. Your password, in particular, will be provided *only once*. Save it in a secure place (Cockroach Labs recommends a password manager) to connect to your cluster in the future. If you forget your password, you can reset it by going to the **SQL Users** page for the cluster, found at `https://cockroachlabs.cloud/cluster//users`. - {{site.data.alerts.end}} - -
- -
- -Follow these steps to create a CockroachDB {{ site.data.products.serverless }} cluster using the ccloud CLI tool. - -{{site.data.alerts.callout_info}} -The ccloud CLI tool is in Preview. -{{site.data.alerts.end}} - -

Install ccloud

- -{% include cockroachcloud/ccloud/install-ccloud.md %} - -### Run `ccloud quickstart` to create a new cluster, create a SQL user, and retrieve the connection string. - -{% include cockroachcloud/ccloud/quickstart.md %} - -Select **General connection string**, then copy the connection string displayed and save it in a secure location. The connection string is the line starting `postgresql://`. - -~~~ -? How would you like to connect? General connection string -Retrieving cluster info: succeeded - Downloading cluster cert to /Users/maxroach/.postgresql/root.crt: succeeded -postgresql://maxroach:ThisIsNotAGoodPassword@blue-dog-147.6wr.cockroachlabs.cloud:26257/defaultdb?sslmode=verify-full&sslrootcert=%2FUsers%2Fmaxroach%2F.postgresql%2Froot.crt -~~~ -
- -
- -
- -{% include {{ page.version.version }}/setup/start-single-node-insecure.md %} - -
\ No newline at end of file diff --git a/src/current/_includes/v22.1/setup/start-single-node-insecure.md b/src/current/_includes/v22.1/setup/start-single-node-insecure.md deleted file mode 100644 index 3807ba7208d..00000000000 --- a/src/current/_includes/v22.1/setup/start-single-node-insecure.md +++ /dev/null @@ -1,22 +0,0 @@ -1. If you haven't already, [download the CockroachDB binary](install-cockroachdb.html). -1. Run the [`cockroach start-single-node`](cockroach-start-single-node.html) command: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start-single-node --advertise-addr 'localhost' --insecure - ~~~ - - This starts an insecure, single-node cluster. -1. Take note of the following connection information in the SQL shell welcome text: - - ~~~ - CockroachDB node starting at 2021-08-30 17:25:30.06524 +0000 UTC (took 4.3s) - build: CCL v21.1.6 @ 2021/07/20 15:33:43 (go1.15.11) - webui: http://localhost:8080 - sql: postgresql://root@localhost:26257?sslmode=disable - ~~~ - - You'll use the `sql` connection string to connect to the cluster later in this tutorial. - - -{% include {{ page.version.version }}/prod-deployment/insecure-flag.md %} \ No newline at end of file diff --git a/src/current/_includes/v22.1/sidebar-data/deploy.json b/src/current/_includes/v22.1/sidebar-data/deploy.json deleted file mode 100644 index c415484db6c..00000000000 --- a/src/current/_includes/v22.1/sidebar-data/deploy.json +++ /dev/null @@ -1,354 +0,0 @@ -{ - "title": "Deploy", - "is_top_level": true, - "items": [ - { - "title": "Deployment Options", - "items": [ - { - "title": "CockroachDB Cloud", - "items": [ - { - "title": "Create an Account", - "urls": [ - "/cockroachcloud/create-an-account.html" - ] - }, - { - "title": "CockroachDB Serverless", - "items": [ - { - "title": "Plan a CockroachDB Serverless (Basic) Cluster", - "urls": [ - "/cockroachcloud/plan-your-cluster-basic.html" - ] - }, - { - "title": "Create a CockroachDB Serverless (Basic) Cluster", - "urls": [ - "/cockroachcloud/create-a-basic-cluster.html" - ] - }, - { - "title": "Connect to Your Cluster", - "urls": [ - "/cockroachcloud/connect-to-a-basic-cluster.html" - ] - }, - { - "title": "Optimize Your CockroachDB Serverless Workload", - "urls": [ - "/cockroachcloud/resource-usage.html" - ] - } - ] - }, - { - "title": "CockroachDB Dedicated", - "items": [ - { - "title": "Plan a CockroachDB Dedicated Cluster", - "urls": [ - "/cockroachcloud/plan-your-cluster.html" - ] - }, - { - "title": "Create a CockroachDB Dedicated Cluster", - "urls": [ - "/cockroachcloud/create-your-cluster.html" - ] - }, - { - "title": "Connect to Your Cluster", - "urls": [ - "/cockroachcloud/connect-to-your-cluster.html" - ] - }, - { - "title": "Move into Production", - "urls": [ - "/cockroachcloud/production-checklist.html" - ] - } - ] - } - ] - }, - { - "title": "CockroachDB Self-Hosted", - "items": [ - { - "title": "Get Started", - "items": [ - { - "title": "Install CockroachDB", - "urls": [ - "/${VERSION}/install-cockroachdb.html", - "/${VERSION}/install-cockroachdb-mac.html", - "/${VERSION}/install-cockroachdb-linux.html", - "/${VERSION}/install-cockroachdb-windows.html" - ] - }, - { - "title": "Start a Local Cluster", - "items": [ - { - "title": "Start From Binary", - "urls": [ - "/${VERSION}/secure-a-cluster.html", - "/${VERSION}/start-a-local-cluster.html" - ] - }, - { - "title": "Start In Kubernetes", - "urls": [ - "/${VERSION}/orchestrate-a-local-cluster-with-kubernetes.html", - 
"/${VERSION}/orchestrate-a-local-cluster-with-kubernetes-insecure.html" - ] - }, - { - "title": "Start In Docker", - "urls": [ - "/${VERSION}/start-a-local-cluster-in-docker-mac.html", - "/${VERSION}/start-a-local-cluster-in-docker-linux.html", - "/${VERSION}/start-a-local-cluster-in-docker-windows.html" - ] - }, - { - "title": "Simulate a Multi-Region Cluster on localhost", - "urls": [ - "/${VERSION}/simulate-a-multi-region-cluster-on-localhost.html" - ] - } - ] - } - ] - }, - { - "title": "Production Checklist", - "urls": [ - "/${VERSION}/recommended-production-settings.html" - ] - }, - { - "title": "Kubernetes Deployment", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/kubernetes-overview.html" - ] - }, - { - "title": "Single-Cluster Deployment", - "urls": [ - "/${VERSION}/deploy-cockroachdb-with-kubernetes.html", - "/${VERSION}/deploy-cockroachdb-with-kubernetes-insecure.html" - ] - }, - { - "title": "OpenShift Deployment", - "urls": [ - "/${VERSION}/deploy-cockroachdb-with-kubernetes-openshift.html" - ] - }, - { - "title": "Multi-Cluster Deployment", - "urls": [ - "/${VERSION}/orchestrate-cockroachdb-with-kubernetes-multi-cluster.html" - ] - } - ] - }, - { - "title": "Manual Deployment", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/manual-deployment.html" - ] - }, - { - "title": "On-Premises Deployment", - "urls": [ - "/${VERSION}/deploy-cockroachdb-on-premises.html", - "/${VERSION}/deploy-cockroachdb-on-premises-insecure.html" - ] - }, - { - "title": "Deploy on AWS", - "urls": [ - "/${VERSION}/deploy-cockroachdb-on-aws.html", - "/${VERSION}/deploy-cockroachdb-on-aws-insecure.html" - ] - }, - { - "title": "Deploy on Azure", - "urls": [ - "/${VERSION}/deploy-cockroachdb-on-microsoft-azure.html", - "/${VERSION}/deploy-cockroachdb-on-microsoft-azure-insecure.html" - ] - }, - { - "title": "Deploy on Digital Ocean", - "urls": [ - "/${VERSION}/deploy-cockroachdb-on-digital-ocean.html", - "/${VERSION}/deploy-cockroachdb-on-digital-ocean-insecure.html" - ] - }, - { - "title": "Deploy on Google Cloud Platform GCE", - "urls": [ - "/${VERSION}/deploy-cockroachdb-on-google-cloud-platform.html", - "/${VERSION}/deploy-cockroachdb-on-google-cloud-platform-insecure.html" - ] - } - ] - } - ] - } - ] - }, - { - "title": "Multi-Region Capabilities", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/multiregion-overview.html" - ] - }, - { - "title": "How to Choose a Multi-Region Configuration", - "urls": [ - "/${VERSION}/choosing-a-multi-region-configuration.html" - ] - }, - { - "title": "When to Use ZONE vs. REGION Survival Goals", - "urls": [ - "/${VERSION}/when-to-use-zone-vs-region-survival-goals.html" - ] - }, - { - "title": "When to Use REGIONAL vs. 
GLOBAL Tables", - "urls": [ - "/${VERSION}/when-to-use-regional-vs-global-tables.html" - ] - }, - { - "title": "Data Domiciling with CockroachDB", - "urls": [ - "/${VERSION}/data-domiciling.html" - ] - }, - { - "title": "Migrate to Multi-Region SQL", - "urls": [ - "/${VERSION}/migrate-to-multiregion-sql.html" - ] - }, - { - "title": "Table Partitioning", - "urls": [ - "/${VERSION}/partitioning.html" - ] - }, - { - "title": "Topology Patterns", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/topology-patterns.html" - ] - }, - { - "title": "Development", - "urls": [ - "/${VERSION}/topology-development.html" - ] - }, - { - "title": "Basic Production", - "urls": [ - "/${VERSION}/topology-basic-production.html" - ] - }, - { - "title": "Regional Tables", - "urls": [ - "/${VERSION}/regional-tables.html" - ] - }, - { - "title": "Global Tables", - "urls": [ - "/${VERSION}/global-tables.html" - ] - }, - { - "title": "Follower Reads", - "urls": [ - "/${VERSION}/topology-follower-reads.html" - ] - }, - { - "title": "Follow-the-Workload", - "urls": [ - "/${VERSION}/topology-follow-the-workload.html" - ] - } - ] - } - ] - }, - { - "title": "Explore CockroachDB Features", - "items": [{ - "title": "Replication & Rebalancing", - "urls": [ - "/${VERSION}/demo-replication-and-rebalancing.html" - ] - }, - { - "title": "Fault Tolerance & Recovery", - "urls": [ - "/${VERSION}/demo-fault-tolerance-and-recovery.html" - ] - }, - { - "title": "Multi-Region Performance", - "urls": [ - "/${VERSION}/demo-low-latency-multi-region-deployment.html" - ] - }, - { - "title": "Serializable Transactions", - "urls": [ - "/${VERSION}/demo-serializable.html" - ] - }, - { - "title": "Spatial Data", - "urls": [ - "/${VERSION}/spatial-tutorial.html" - ] - }, - { - "title": "Cross-Cloud Migration", - "urls": [ - "/${VERSION}/demo-automatic-cloud-migration.html" - ] - }, - { - "title": "JSON Support", - "urls": [ - "/${VERSION}/demo-json-support.html" - ] - } - ] - } - ] -} diff --git a/src/current/_includes/v22.1/sidebar-data/develop.json b/src/current/_includes/v22.1/sidebar-data/develop.json deleted file mode 100644 index 58e233b31f8..00000000000 --- a/src/current/_includes/v22.1/sidebar-data/develop.json +++ /dev/null @@ -1,507 +0,0 @@ -{ - "title": "Develop", - "is_top_level": true, - "items": [ - { - "title": "Developer Guide Overview", - "urls": [ - "/${VERSION}/developer-guide-overview.html" - ] - }, - { - "title": "Connect to CockroachDB", - "items": [ - { - "title": "Install a Driver or ORM Framework", - "urls": [ - "/${VERSION}/install-client-drivers.html" - ] - }, - { - "title": "Connect to a Cluster", - "urls": [ - "/${VERSION}/connect-to-the-database.html" - ] - }, - { - "title": "Use Connection Pools", - "urls": [ - "/${VERSION}/connection-pooling.html" - ] - } - ] - }, - { - "title": "Design a Database Schema", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/schema-design-overview.html" - ] - }, - { - "title": "Create a Database", - "urls": [ - "/${VERSION}/schema-design-database.html" - ] - }, - { - "title": "Create a User-defined Schema", - "urls": [ - "/${VERSION}/schema-design-schema.html" - ] - }, - { - "title": "Create a Table", - "urls": [ - "/${VERSION}/schema-design-table.html" - ] - }, - { - "title": "Secondary Indexes", - "urls": [ - "/${VERSION}/schema-design-indexes.html" - ] - }, - { - "title": "Update a Database Schema", - "items": [ - { - "title": "Change and Remove Objects", - "urls": [ - "/${VERSION}/schema-design-update.html" - ] - }, - { - "title": "Online 
Schema Changes", - "urls": [ - "/${VERSION}/online-schema-changes.html" - ] - } - ] - }, - { - "title": "Advanced Schema Design", - "items": [ - { - "title": "Computed Columns", - "urls": [ - "/${VERSION}/computed-columns.html" - ] - }, - { - "title": "Group Columns into Families", - "urls": [ - "/${VERSION}/column-families.html" - ] - }, - { - "title": "Index a Subset of Rows", - "urls": [ - "/${VERSION}/partial-indexes.html" - ] - }, - { - "title": "Index Sequential Keys", - "urls": [ - "/${VERSION}/hash-sharded-indexes.html" - ] - }, - { - "title": "Index JSON and Array Data", - "urls": [ - "/${VERSION}/inverted-indexes.html" - ] - }, - { - "title": "Index Expressions", - "urls": [ - "/${VERSION}/expression-indexes.html" - ] - }, - { - "title": "Index Spatial Data", - "urls": [ - "/${VERSION}/spatial-indexes.html" - ] - }, - { - "title": "Scale to Multiple Regions", - "urls": [ - "/${VERSION}/multiregion-scale-application.html" - ] - } - ] - } - ] - }, - { - "title": "Write Data", - "items": [ - { - "title": "Insert Data", - "urls": [ - "/${VERSION}/insert-data.html" - ] - }, - { - "title": "Update Data", - "urls": [ - "/${VERSION}/update-data.html" - ] - }, - { - "title": "Bulk-update Data", - "urls": [ - "/${VERSION}/bulk-update-data.html" - ] - }, - { - "title": "Delete Data", - "urls": [ - "/${VERSION}/delete-data.html" - ] - }, - { - "title": "Bulk-delete Data", - "urls": [ - "/${VERSION}/bulk-delete-data.html" - ] - }, - { - "title": "Batch Delete Expired Data with Row-Level TTL", - "urls": [ - "/${VERSION}/row-level-ttl.html" - ] - } - ] - }, - { - "title": "Read Data", - "items": [ - { - "title": "Select Rows of Data", - "urls": [ - "/${VERSION}/query-data.html" - ] - }, - { - "title": "Reuse Query Results", - "items": [ - { - "title": "Reusable Views", - "urls": [ - "/${VERSION}/views.html" - ] - }, - { - "title": "Subqueries", - "urls": [ - "/${VERSION}/subqueries.html" - ] - } - ] - }, - { - "title": "Temporary Tables", - "urls": [ - "/${VERSION}/temporary-tables.html" - ] - }, - { - "title": "Paginate Results", - "urls": [ - "/${VERSION}/pagination.html" - ] - }, - { - "title": "Follower Reads", - "urls": [ - "/${VERSION}/follower-reads.html" - ] - }, - { - "title": "AS OF SYSTEM TIME", - "urls": [ - "/${VERSION}/as-of-system-time.html" - ] - }, - { - "title": "Query Spatial Data", - "urls": [ - "/${VERSION}/query-spatial-data.html" - ] - } - ] - }, - { - "title": "Transactions", - "items": [ - { - "title": "Transactions Overview", - "urls": [ - "/${VERSION}/transactions.html" - ] - }, - { - "title": "Advanced Client-side Transaction Retries", - "urls": [ - "/${VERSION}/advanced-client-side-transaction-retries.html" - ] - } - ] - }, - { - "title": "Test Your Application Locally", - "urls": [ - "/${VERSION}/local-testing.html" - ] - }, - { - "title": "Troubleshoot Common Problems", - "urls": [ - "/${VERSION}/error-handling-and-troubleshooting.html" - ] - }, - { - "title": "Optimize Statement Performance", - "items": - [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/make-queries-fast.html" - ] - }, - { - "title": "Statement Tuning with EXPLAIN", - "urls": [ - "/${VERSION}/sql-tuning-with-explain.html" - ] - }, - { - "title": "Apply SQL Statement Performance Rules", - "urls": [ - "/${VERSION}/apply-statement-performance-rules.html" - ] - }, - { - "title": "Map SQL Activity using an Application Name", - "urls": [ - "/${VERSION}/map-sql-activity-to-app.html" - ] - }, - { - "title": "SQL Performance Best Practices", - "urls": [ - 
"/${VERSION}/performance-best-practices-overview.html" - ] - }, - { - "title": "Performance Tuning Recipes", - "urls": [ - "/${VERSION}/performance-recipes.html" - ] - }, - { - "title": "Performance Features", - "items": - [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/performance-features-overview.html" - ] - }, - { - "title": "Indexes", - "urls": [ - "/${VERSION}/indexes.html" - ] - }, - { - "title": "Cost-Based Optimizer", - "urls": [ - "/${VERSION}/cost-based-optimizer.html" - ] - }, - { - "title": "Vectorized Execution Engine", - "urls": [ - "/${VERSION}/vectorized-execution.html" - ] - }, - { - "title": "Load-Based Splitting", - "urls": [ - "/${VERSION}/load-based-splitting.html" - ] - }, - { - "title": "Admission Control", - "urls": [ - "/${VERSION}/admission-control.html" - ] - } - ] - } - ] - }, - { - "title": "Example Applications", - "items": [ - { - "title": "Overview of Example Applications", - "urls": [ - "/${VERSION}/example-apps.html" - ] - }, - { - "title": "Build the Roach Data Application using Spring Boot", - "items": [ - { - "title": "Spring Boot with JDBC", - "urls": [ - "/${VERSION}/build-a-spring-app-with-cockroachdb-jdbc.html" - ] - }, - { - "title": "Spring Boot with JPA", - "urls": [ - "/${VERSION}/build-a-spring-app-with-cockroachdb-jpa.html" - ] - } - ] - }, - { - "title": "The MovR Example Application", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/movr.html" - ] - }, - { - "title": "Global Application", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/movr-flask-overview.html" - ] - }, - { - "title": "Global Application Use Case", - "urls": [ - "/${VERSION}/movr-flask-use-case.html" - ] - }, - { - "title": "Multi-region Database Schema", - "urls": [ - "/${VERSION}/movr-flask-database.html" - ] - }, - { - "title": "Set up a Development Environment", - "urls": [ - "/${VERSION}/movr-flask-setup.html" - ] - }, - { - "title": "Develop a Global Application", - "urls": [ - "/${VERSION}/movr-flask-application.html" - ] - }, - { - "title": "Deploy a Global Application", - "urls": [ - "/${VERSION}/movr-flask-deployment.html" - ] - } - ] - } - ] - }, - { - "title": "Deploy a Python To-Do App with Flask, Kubernetes, and CockroachDB Cloud", - "urls": [ - "/cockroachcloud/deploy-a-python-to-do-app-with-flask-kubernetes-and-cockroachcloud.html" - ] - } - ] - }, - { - "title": "Tutorials", - "items": [ - { - "title": "Schema Migration Tools", - "items": [ - { - "title": "Alembic", - "urls": [ - "/${VERSION}/alembic.html" - ] - }, - { - "title": "Flyway", - "urls": [ - "/${VERSION}/flyway.html" - ] - }, - { - "title": "Liquibase", - "urls": [ - "/${VERSION}/liquibase.html" - ] - } - ] - }, - { - "title": "GUIs & IDEs", - "items": [ - { - "title": "DBeaver GUI", - "urls": [ - "/${VERSION}/dbeaver.html" - ] - }, - { - "title": "IntelliJ IDEA", - "urls": [ - "/${VERSION}/intellij-idea.html" - ] - } - ] - }, - { - "title": "Data Security Tools", - "items": [ - { - "title": "Satori", - "urls": [ - "/${VERSION}/satori-integration.html" - ] - }, - { - "title": "HashiCorp Vault", - "urls": [ - "/${VERSION}/hashicorp-integration.html" - ] - } - ] - } - ] - }, - { - "title": "SQL Playground", - "is_top_level": true, - "urls": [ - "https://www.cockroachlabs.com/docs/tutorials/sql-playground" - ] - } - ] -} diff --git a/src/current/_includes/v22.1/sidebar-data/get-started.json b/src/current/_includes/v22.1/sidebar-data/get-started.json deleted file mode 100644 index c12c6eb6120..00000000000 --- 
a/src/current/_includes/v22.1/sidebar-data/get-started.json +++ /dev/null @@ -1,167 +0,0 @@ -{ - "title": "Get Started", - "is_top_level": true, - "items": [{ - "title": "Quickstart", - "urls": [ - "/cockroachcloud/quickstart.html" - ] - }, - { - "title": "Learn CockroachDB SQL", - "urls": [ - "/cockroachcloud/learn-cockroachdb-sql.html", - "/${VERSION}/learn-cockroachdb-sql.html" - ] - }, - { - "title": "Build a Sample Application", - "items": [ - { - "title": "JavaScript/TypeScript", - "urls": [ - "/${VERSION}/build-a-nodejs-app-with-cockroachdb.html", - "/${VERSION}/build-a-nodejs-app-with-cockroachdb-sequelize.html", - "/${VERSION}/build-a-nodejs-app-with-cockroachdb-knexjs.html", - "/${VERSION}/build-a-nodejs-app-with-cockroachdb-prisma.html", - "/${VERSION}/build-a-typescript-app-with-cockroachdb.html" - ] - }, - { - "title": "Python", - "urls": [ - "/${VERSION}/build-a-python-app-with-cockroachdb-psycopg3.html", - "/${VERSION}/build-a-python-app-with-cockroachdb.html", - "/${VERSION}/build-a-python-app-with-cockroachdb-sqlalchemy.html", - "/${VERSION}/build-a-python-app-with-cockroachdb-django.html" - ] - }, - { - "title": "Golang", - "urls": [ - "/${VERSION}/build-a-go-app-with-cockroachdb.html", - "/${VERSION}/build-a-go-app-with-cockroachdb-gorm.html", - "/${VERSION}/build-a-go-app-with-cockroachdb-pq.html", - "/${VERSION}/build-a-go-app-with-cockroachdb-upperdb.html" - ] - }, - { - "title": "Java", - "urls": [ - "/${VERSION}/build-a-java-app-with-cockroachdb.html", - "/${VERSION}/build-a-java-app-with-cockroachdb-hibernate.html", - "/${VERSION}/build-a-java-app-with-cockroachdb-jooq.html", - "/${VERSION}/build-a-spring-app-with-cockroachdb-mybatis.html" - ] - }, - { - "title": "Ruby", - "urls": [ - "/${VERSION}/build-a-ruby-app-with-cockroachdb.html", - "/${VERSION}/build-a-ruby-app-with-cockroachdb-activerecord.html" - ] - }, - { - "title": "C# (.NET)", - "urls": [ - "/${VERSION}/build-a-csharp-app-with-cockroachdb.html" - ] - }, - { - "title": "Rust", - "urls": [ - "/${VERSION}/build-a-rust-app-with-cockroachdb.html" - ] - } - ] - }, - { - "title": "Build a Serverless Application", - "items": [ - { - "title": "AWS Lambda", - "urls": [ - "/${VERSION}/deploy-lambda-function.html" - ] - }, - { - "title": "Google Cloud Run", - "urls": [ - "/${VERSION}/deploy-app-gcr.html" - ] - }, - { - "title": "Netlify", - "urls": [ - "/${VERSION}/deploy-app-netlify.html" - ] - }, - { - "title": "Vercel", - "urls": [ - "/${VERSION}/deploy-app-vercel.html" - ] - }, - { - "title": "Serverless Function Best Practices", - "urls": [ - "/${VERSION}/serverless-function-best-practices.html" - ] - } - ] - }, - { - "title": "Glossary", - "urls": [ - "/${VERSION}/architecture/glossary.html" - ] - }, - { - "title": "FAQs", - "items": [ - { - "title": "CockroachDB FAQs", - "urls": [ - "/${VERSION}/frequently-asked-questions.html" - ] - }, - { - "title": "SQL FAQs", - "urls": [ - "/${VERSION}/sql-faqs.html" - ] - }, - { - "title": "Operational FAQs", - "urls": [ - "/${VERSION}/operational-faqs.html" - ] - }, - { - "title": "Availability FAQs", - "urls": [ - "/${VERSION}/multi-active-availability.html" - ] - }, - { - "title": "Licensing FAQs", - "urls": [ - "/${VERSION}/licensing-faqs.html" - ] - }, - { - "title": "Enterprise Features", - "urls": [ - "/${VERSION}/enterprise-licensing.html" - ] - }, - { - "title": "CockroachDB in Comparison", - "urls": [ - "/${VERSION}/cockroachdb-in-comparison.html" - ] - } - ] - } - ] -} diff --git a/src/current/_includes/v22.1/sidebar-data/manage.json 
b/src/current/_includes/v22.1/sidebar-data/manage.json deleted file mode 100644 index a10227304c5..00000000000 --- a/src/current/_includes/v22.1/sidebar-data/manage.json +++ /dev/null @@ -1,628 +0,0 @@ -{ - "title": "Manage", - "is_top_level": true, - "items": [ - { - "title": "Manage CockroachDB Cloud Clusters", - "items": [ - { - "title": "Manage a CockroachDB Basic Cluster", - "urls": [ - "/cockroachcloud/basic-cluster-management.html" - ] - }, - { - "title": "Manage a CockroachDB Standard Cluster", - "urls": [ - "/cockroachcloud/cluster-management.html" - ] - }, - { - "title": "Manage Billing", - "urls": [ - "/cockroachcloud/billing-management.html" - ] - }, - { - "title": "Use the Cloud API", - "urls": [ - "/cockroachcloud/cloud-api.html" - ] - }, - { - "title": "Use the ccloud command", - "urls": [ - "/cockroachcloud/ccloud-get-started.html" - ] - } - ] - }, - { - "title": "Operate CockroachDB on Kubernetes", - "items": [ - { - "title": "Pod Scheduling", - "urls": [ - "/${VERSION}/schedule-cockroachdb-kubernetes.html" - ] - }, - { - "title": "Resource Management", - "urls": [ - "/${VERSION}/configure-cockroachdb-kubernetes.html" - ] - }, - { - "title": "Certificate Management", - "urls": [ - "/${VERSION}/secure-cockroachdb-kubernetes.html" - ] - }, - { - "title": "Cluster Scaling", - "urls": [ - "/${VERSION}/scale-cockroachdb-kubernetes.html" - ] - }, - { - "title": "Cluster Monitoring", - "urls": [ - "/${VERSION}/monitor-cockroachdb-kubernetes.html" - ] - }, - { - "title": "Cluster Upgrades", - "urls": [ - "/${VERSION}/upgrade-cockroachdb-kubernetes.html" - ] - }, - { - "title": "Optimizing Performance", - "urls": [ - "/${VERSION}/kubernetes-performance.html" - ] - } - ] - }, - { - "title": "Back Up and Restore Data", - "items": [ - { - "title": "Backup and Restore Overview", - "urls": [ - "/${VERSION}/backup-and-restore-overview.html" - ] - }, - { - "title": "Backup Architecture", - "urls": [ - "/${VERSION}/backup-architecture.html" - ] - }, - { - "title": "Back Up and Restore CockroachDB Cloud Clusters", - "items": [ - { - "title": "Take and Restore Customer-Owned Backups", - "urls": [ - "/cockroachcloud/take-and-restore-self-managed-backups.html" - ] - }, - { - "title": "Use Managed-Service Backups", - "urls": [ - "/cockroachcloud/managed-backups.html" - ] - } - ] - }, - { - "title": "Back Up and Restore CockroachDB Self-Hosted Clusters", - "items": [ - { - "title": "Full and Incremental Backups", - "urls": [ - "/${VERSION}/take-full-and-incremental-backups.html" - ] - }, - { - "title": "Backups with Revision History and Point-in-time Restore", - "urls": [ - "/${VERSION}/take-backups-with-revision-history-and-restore-from-a-point-in-time.html" - ] - }, - { - "title": "Encrypted Backup and Restore", - "urls": [ - "/${VERSION}/take-and-restore-encrypted-backups.html" - ] - }, - { - "title": "Locality-aware Backup and Restore", - "urls": [ - "/${VERSION}/take-and-restore-locality-aware-backups.html" - ] - }, - { - "title": "Scheduled Backups", - "urls": [ - "/${VERSION}/manage-a-backup-schedule.html" - ] - } - ] - }, - { - "title": "Restoring Backups Across Versions", - "urls": [ - "/${VERSION}/restoring-backups-across-versions.html" - ] - } - ] - }, - { - "title": "File Storage for Bulk Operations", - "items": [ - { - "title": "Cloud Storage", - "urls": [ - "/${VERSION}/use-cloud-storage-for-bulk-operations.html" - ] - }, - { - "title": "Userfile Storage", - "urls": [ - "/${VERSION}/use-userfile-for-bulk-operations.html" - ] - }, - { - "title": "Local File Server", - "urls": [ - 
"/${VERSION}/use-a-local-file-server-for-bulk-operations.html" - ] - } - ] - }, - { - "title": "Security", - "items": [ - { - "title": "Secure CockroachDB Cloud Clusters", - "urls": ["/cockroachcloud/security-overview.html"], - "items": [ - { - "title": "Customer-Managed Encryption Keys (CMEK) for CockroachDB Advanced", - "urls": [], - "items": [ - { - "title": "Overview", - "urls": [ - "/cockroachcloud/cmek.html" - ] - }, - { - "title": "Manage CMEK", - "urls": [ - "/cockroachcloud/managing-cmek.html" - ] - } - ] - }, - { - "title": "Authentication", - "items": [ - { - "title": "Authentication Overview", - "urls": [ - "/cockroachcloud/authentication.html" - ] - }, - { - "title": "Single Sign-On (SSO)", - "urls": [ - "/cockroachcloud/cloud-org-sso.html" - ] - }, - { - "title": "Configure Cloud Organization SSO", - "urls": [ - "/cockroachcloud/configure-cloud-org-sso.html" - ] - }, - { - "title": "Certificate Authentication for SQL Clients in CockroachDB Advanced Clusters", - "urls": [ - "/cockroachcloud/client-certs-advanced.html" - ] - } - ] - }, - { - "title": "Network Authorization", - "urls": [ - "/cockroachcloud/network-authorization.html" - ] - }, - { - "title": "SQL Audit Logging", - "urls": [ - "/cockroachcloud/sql-audit-logging.html" - ] - }, - { - "title": "Export Cloud Organization Audit Logs", - "urls": [ - "/cockroachcloud/cloud-org-audit-logs.html" - ] - }, - { - "title": "CockroachDB Cloud Access Management Overview and FAQ", - "urls": [ - "/cockroachcloud/authorization.html" - ] - }, - { - "title": "Managing Access in CockroachDB Cloud", - "urls": [ - "/cockroachcloud/managing-access.html" - ] - } - ] - }, - { - "title": "Secure CockroachDB Self-Hosted Clusters", - "items": [ - { - "title": "Manage Security Certificates", - "items": [ - { - "title": "Use the CockroachDB CLI to provision a development cluster", - "urls": [ - "/${VERSION}/manage-certs-cli.html" - ] - }, - { - "title": "Manage PKI certificates with HashiCorp Vault", - "urls": [ - "/${VERSION}/manage-certs-vault.html" - ] - }, - { - "title": "Create Security Certificates using OpenSSL", - "urls": [ - "/${VERSION}/create-security-certificates-openssl.html" - ] - }, - { - "title": "Use Online Certificate Status Protocol (OCSP)", - "urls": [ - "/${VERSION}/manage-certs-revoke-ocsp.html" - ] - } - ] - }, - { - "title": "Authentication", - "urls": [ - "/${VERSION}/authentication.html" - ] - }, - { - "title": "Encryption", - "urls": [ - "/${VERSION}/encryption.html" - ] - }, - { - "title": "Authorization", - "urls": [ - "/${VERSION}/authorization.html" - ] - }, - { - "title": "SQL Audit Logging", - "urls": [ - "/${VERSION}/sql-audit-logging.html" - ] - }, - { - "title": "GSSAPI Authentication", - "urls": [ - "/${VERSION}/gssapi_authentication.html" - ] - }, - { - "title": "Single Sign-on", - "urls": [ - "/${VERSION}/sso.html" - ] - }, - { - "title": "Rotate Security Certificates", - "urls": [ - "/${VERSION}/rotate-certificates.html" - ] - } - ] - }, - { - "title": "CockroachDB General Security Tutorials", - "items": [ - { - "title": "Configure SQL Authentication for Hardened Serverless Cluster Security", - "urls": [ - "/${VERSION}/security-reference/config-secure-hba.html" - ] - }, - { - "title": "Use Hashicorp Vault's Dynamic Secrets", - "urls": [ - "/${VERSION}/vault-db-secrets-tutorial.html" - ] - } - ] - } - ] - }, - { - "title": "Monitoring and Alerting", - "items": [ - { - "title": "Monitor a CockroachDB Cloud Cluster", - "items": [ - { - "title": "Cluster Overview Page", - "urls": [ - 
"/cockroachcloud/cluster-overview-page.html" - ] - }, - { - "title": "Alerts Page", - "urls": [ - "/cockroachcloud/alerts-page.html" - ] - }, - { - "title": "Tools Page", - "urls": [ - "/cockroachcloud/tools-page.html" - ] - }, - { - "title": "Statements Page", - "urls": [ - "/cockroachcloud/statements-page.html" - ] - }, - { - "title": "Sessions Page", - "urls": [ - "/cockroachcloud/sessions-page.html" - ] - }, - { - "title": "Transactions Page", - "urls": [ - "/cockroachcloud/transactions-page.html" - ] - }, - { - "title": "Databases Page", - "urls": [ - "/cockroachcloud/databases-page.html" - ] - } - ] - }, - { - "title": "Monitor a CockroachDB Self-Hosted Cluster", - "items": [ - { - "title": "Monitoring Clusters Overview", - "urls": [ - "/${VERSION}/monitoring-and-alerting.html" - ] - }, - { - "title": "Common Issues to Monitor", - "urls": [ - "/${VERSION}/common-issues-to-monitor.html" - ] - }, - { - "title": "Enable the Node Map", - "urls": [ - "/${VERSION}/enable-node-map.html" - ] - }, - { - "title": "Use Prometheus and Alertmanager", - "urls": [ - "/${VERSION}/monitor-cockroachdb-with-prometheus.html" - ] - }, - { - "title": "Cluster API", - "urls": [ - "/${VERSION}/cluster-api.html" - ] - } - ] - }, - { - "title": "Third-Party Monitoring Integrations", - "items": [ - { - "title": "Third-Party Monitoring Integration Overview", - "urls": [ - "/${VERSION}/third-party-monitoring-tools.html" - ] - }, - { - "title": "Monitor CockroachDB {{ site.data.products.core }} with Datadog", - "urls": [ - "/${VERSION}/datadog.html" - ] - }, - { - "title": "Monitor with DBmarlin", - "urls": [ - "/${VERSION}/dbmarlin.html" - ] - }, - { - "title": "Monitor with Kibana", - "urls": [ - "/${VERSION}/kibana.html" - ] - } - ] - } - ] - }, - { - "title": "Logging", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/logging-overview.html" - ] - }, - { - "title": "Configure Logs", - "urls": [ - "/${VERSION}/configure-logs.html" - ] - }, - { - "title": "Logging Use Cases", - "urls": [ - "/${VERSION}/logging-use-cases.html" - ] - }, - - { - "title": "Export Logs From CockroachDB Advanced", - "urls": [ - "/cockroachcloud/export-logs.html" - ] - } - ] - }, - { - "title": "Cluster Maintenance", - "items": [ - { - "title": "Upgrade a Cluster", - "items": [ - { - "title": "Uprade a CockroachDB Cloud Cluster", - "items": [ - { - "title": "Upgrade Policy", - "urls": [ - "/cockroachcloud/upgrade-policy.html" - ] - }, - { - "title": "Upgrade a cluster", - "urls": [ - "/cockroachcloud/upgrade-cockroach-version.html" - ] - } - ] - }, - { - "title": "Upgrade a CockroachDB Self-Hosted Cluster", - "items": [ - { - "title": "Upgrade to CockroachDB {{ page.version.version }}", - "urls": [ - "/${VERSION}/upgrade-cockroach-version.html" - ] - } - ] - } - ] - }, - { - "title": "Manage Long-Running Queries", - "urls": [ - "/${VERSION}/manage-long-running-queries.html" - ] - }, - { - "title": "Node Shutdown", - "urls": [ - "/${VERSION}/node-shutdown.html" - ] - }, - { - "title": "Disaster Recovery", - "urls": [ - "/${VERSION}/disaster-recovery.html" - ] - } - ] - }, - { - "title": "Replication Controls", - "urls": [ - "/${VERSION}/configure-replication-zones.html" - ] - }, - { - "title": "Troubleshooting", - "items": [ - { - "title": "Troubleshooting Overview", - "urls": [ - "/${VERSION}/troubleshooting-overview.html" - ] - }, - { - "title": "Common Errors and Solutions", - "urls": [ - "/${VERSION}/common-errors.html" - ] - }, - { - "title": "Troubleshoot Cluster Setup", - "urls": [ - 
"/${VERSION}/cluster-setup-troubleshooting.html" - ] - }, - { - "title": "Troubleshoot Statement Behavior", - "urls": [ - "/${VERSION}/query-behavior-troubleshooting.html" - ] - }, - { - "title": "Troubleshoot CockroachDB Cloud", - "urls": [ - "/cockroachcloud/troubleshooting-page.html" - ] - }, - { - "title": "Replication Reports", - "urls": [ - "/${VERSION}/query-replication-reports.html" - ] - }, - { - "title": "Support Resources", - "urls": [ - "/${VERSION}/support-resources.html" - ] - }, - { - "title": "File an Issue", - "urls": [ - "/${VERSION}/file-an-issue.html" - ] - } - ] - } - ] -} diff --git a/src/current/_includes/v22.1/sidebar-data/migrate.json b/src/current/_includes/v22.1/sidebar-data/migrate.json deleted file mode 100644 index 7332c7ba312..00000000000 --- a/src/current/_includes/v22.1/sidebar-data/migrate.json +++ /dev/null @@ -1,77 +0,0 @@ -{ - "title": "Migrate", - "is_top_level": true, - "items": [ - { - "title": "Migration Overview", - "urls": [ - "/${VERSION}/migration-overview.html" - ] - }, - { - "title": "Use the Schema Conversion Tool", - "urls": [ - "/cockroachcloud/migrations-page.html" - ] - }, - { - "title": "Migrate Data to CockroachDB", - "items": [ - { - "title": "Migrate data using AWS DMS", - "urls": [ - "/${VERSION}/aws-dms.html" - ] - }, - { - "title": "Migrate from CSV", - "urls": [ - "/${VERSION}/migrate-from-csv.html" - ] - }, - { - "title": "Migrate from Avro", - "urls": [ - "/${VERSION}/migrate-from-avro.html" - ] - }, - { - "title": "Migrate from Shapefiles", - "urls": [ - "/${VERSION}/migrate-from-shapefiles.html" - ] - }, - { - "title": "Migrate from OpenStreetMap", - "urls": [ - "/${VERSION}/migrate-from-openstreetmap.html" - ] - }, - { - "title": "Migrate from GeoJSON", - "urls": [ - "/${VERSION}/migrate-from-geojson.html" - ] - }, - { - "title": "Migrate from GeoPackage", - "urls": [ - "/${VERSION}/migrate-from-geopackage.html" - ] - }, - { - "title": "Import Performance Best Practices", - "urls": [ - "/${VERSION}/import-performance-best-practices.html" - ] - } - ] - }, - { - "title": "Export Spatial Data", - "urls": [ - "/${VERSION}/export-spatial-data.html" - ] - } - ] -} diff --git a/src/current/_includes/v22.1/sidebar-data/reference.json b/src/current/_includes/v22.1/sidebar-data/reference.json deleted file mode 100644 index 560f98af121..00000000000 --- a/src/current/_includes/v22.1/sidebar-data/reference.json +++ /dev/null @@ -1,1878 +0,0 @@ -{ - "title": "Reference", - "is_top_level": true, - "items": [ - { - "title": "Architecture", - "items": [ - { - "title": "Architecture Overview", - "urls": [ - "/${VERSION}/architecture/overview.html" - ] - }, - { - "title": "SQL Layer", - "urls": [ - "/${VERSION}/architecture/sql-layer.html" - ] - }, - { - "title": "Transaction Layer", - "urls": [ - "/${VERSION}/architecture/transaction-layer.html" - ] - }, - { - "title": "Distribution Layer", - "urls": [ - "/${VERSION}/architecture/distribution-layer.html" - ] - }, - { - "title": "Replication Layer", - "urls": [ - "/${VERSION}/architecture/replication-layer.html" - ] - }, - { - "title": "Storage Layer", - "urls": [ - "/${VERSION}/architecture/storage-layer.html" - ] - }, - { - "title": "Life of a Distributed Transaction", - "urls": [ - "/${VERSION}/architecture/life-of-a-distributed-transaction.html" - ] - }, - { - "title": "Reads and Writes Overview", - "urls": [ - "/${VERSION}/architecture/reads-and-writes-overview.html" - ] - } - ] - }, - { - "title": "SQL", - "items": [ - { - "title": "SQL Overview", - "urls": [ - 
"/${VERSION}/sql-feature-support.html" - ] - }, - { - "title": "PostgreSQL Compatibility", - "urls": [ - "/${VERSION}/postgresql-compatibility.html" - ] - }, - { - "title": "SQL Syntax", - "items": [ - { - "title": "Full SQL Grammar", - "urls": [ - "/${VERSION}/sql-grammar.html" - ] - }, - { - "title": "Keywords & Identifiers", - "urls": [ - "/${VERSION}/keywords-and-identifiers.html" - ] - }, - { - "title": "Constants", - "urls": [ - "/${VERSION}/sql-constants.html" - ] - }, - { - "title": "Selection Queries", - "urls": [ - "/${VERSION}/selection-queries.html" - ] - }, - { - "title": "Cursors", - "urls": [ - "/${VERSION}/cursors.html" - ] - }, - { - "title": "Table Expressions", - "urls": [ - "/${VERSION}/table-expressions.html" - ] - }, - { - "title": "Common Table Expressions", - "urls": [ - "/${VERSION}/common-table-expressions.html" - ] - }, - { - "title": "Scalar Expressions", - "urls": [ - "/${VERSION}/scalar-expressions.html" - ] - }, - { - "title": "NULL Handling", - "urls": [ - "/${VERSION}/null-handling.html" - ] - } - ] - }, - { - "title": "SQL Statements", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/sql-statements.html" - ] - }, - { - "title": "ADD COLUMN", - "urls": [ - "/${VERSION}/add-column.html" - ] - }, - { - "title": "ADD CONSTRAINT", - "urls": [ - "/${VERSION}/add-constraint.html" - ] - }, - { - "title": "ADD REGION (Enterprise)", - "urls": [ - "/${VERSION}/add-region.html" - ] - }, - { - "title": "ADD SUPER REGION (Enterprise)", - "urls": [ - "/${VERSION}/add-super-region.html" - ] - }, - { - "title": "ALTER BACKUP (Enterprise)", - "urls": [ - "/${VERSION}/alter-backup.html" - ] - }, - { - "title": "ALTER CHANGEFEED (Enterprise)", - "urls": [ - "/${VERSION}/alter-changefeed.html" - ] - }, - { - "title": "ALTER COLUMN", - "urls": [ - "/${VERSION}/alter-column.html" - ] - }, - { - "title": "ALTER DATABASE", - "urls": [ - "/${VERSION}/alter-database.html" - ] - }, - { - "title": "ALTER DEFAULT PRIVILEGES", - "urls": [ - "/${VERSION}/alter-default-privileges.html" - ] - }, - { - "title": "ALTER INDEX", - "urls": [ - "/${VERSION}/alter-index.html" - ] - }, - { - "title": "ALTER PARTITION (Enterprise)", - "urls": [ - "/${VERSION}/alter-partition.html" - ] - }, - { - "title": "ALTER PRIMARY KEY", - "urls": [ - "/${VERSION}/alter-primary-key.html" - ] - }, - { - "title": "ALTER RANGE", - "urls": [ - "/${VERSION}/alter-range.html" - ] - }, - { - "title": "ALTER RANGE ... 
RELOCATE", - "urls": [ - "/${VERSION}/alter-range-relocate.html" - ] - }, - { - "title": "ALTER ROLE", - "urls": [ - "/${VERSION}/alter-role.html" - ] - }, - { - "title": "ALTER SCHEMA", - "urls": [ - "/${VERSION}/alter-schema.html" - ] - }, - { - "title": "ALTER SEQUENCE", - "urls": [ - "/${VERSION}/alter-sequence.html" - ] - }, - { - "title": "ALTER SUPER REGION (Enterprise)", - "urls": [ - "/${VERSION}/alter-super-region.html" - ] - }, - { - "title": "ALTER TABLE", - "urls": [ - "/${VERSION}/alter-table.html" - ] - }, - { - "title": "ALTER TYPE", - "urls": [ - "/${VERSION}/alter-type.html" - ] - }, - { - "title": "ALTER USER", - "urls": [ - "/${VERSION}/alter-user.html" - ] - }, - { - "title": "ALTER VIEW", - "urls": [ - "/${VERSION}/alter-view.html" - ] - }, - { - "title": "EXPERIMENTAL_AUDIT", - "urls": [ - "/${VERSION}/experimental-audit.html" - ] - }, - { - "title": "BACKUP", - "urls": [ - "/${VERSION}/backup.html" - ] - }, - { - "title": "BEGIN", - "urls": [ - "/${VERSION}/begin-transaction.html" - ] - }, - { - "title": "CANCEL JOB", - "urls": [ - "/${VERSION}/cancel-job.html" - ] - }, - { - "title": "CANCEL QUERY", - "urls": [ - "/${VERSION}/cancel-query.html" - ] - }, - { - "title": "CANCEL SESSION", - "urls": [ - "/${VERSION}/cancel-session.html" - ] - }, - { - "title": "COMMENT ON", - "urls": [ - "/${VERSION}/comment-on.html" - ] - }, - { - "title": "COMMIT", - "urls": [ - "/${VERSION}/commit-transaction.html" - ] - }, - { - "title": "CONFIGURE ZONE", - "urls": [ - "/${VERSION}/configure-zone.html" - ] - }, - { - "title": "COPY FROM", - "urls": [ - "/${VERSION}/copy-from.html" - ] - }, - { - "title": "CREATE CHANGEFEED (Enterprise)", - "urls": [ - "/${VERSION}/create-changefeed.html" - ] - }, - { - "title": "CREATE DATABASE", - "urls": [ - "/${VERSION}/create-database.html" - ] - }, - { - "title": "CREATE INDEX", - "urls": [ - "/${VERSION}/create-index.html" - ] - }, - { - "title": "CREATE ROLE", - "urls": [ - "/${VERSION}/create-role.html" - ] - }, - { - "title": "CREATE SCHEDULE FOR BACKUP", - "urls": [ - "/${VERSION}/create-schedule-for-backup.html" - ] - }, - { - "title": "CREATE SCHEMA", - "urls": [ - "/${VERSION}/create-schema.html" - ] - }, - { - "title": "CREATE SEQUENCE", - "urls": [ - "/${VERSION}/create-sequence.html" - ] - }, - { - "title": "CREATE STATISTICS", - "urls": [ - "/${VERSION}/create-statistics.html" - ] - }, - { - "title": "CREATE TABLE", - "urls": [ - "/${VERSION}/create-table.html" - ] - }, - { - "title": "CREATE TABLE AS", - "urls": [ - "/${VERSION}/create-table-as.html" - ] - }, - { - "title": "CREATE TYPE", - "urls": [ - "/${VERSION}/create-type.html" - ] - }, - { - "title": "CREATE USER", - "urls": [ - "/${VERSION}/create-user.html" - ] - }, - { - "title": "CREATE VIEW", - "urls": [ - "/${VERSION}/create-view.html" - ] - }, - { - "title": "DELETE", - "urls": [ - "/${VERSION}/delete.html" - ] - }, - { - "title": "DROP COLUMN", - "urls": [ - "/${VERSION}/drop-column.html" - ] - }, - { - "title": "DROP CONSTRAINT", - "urls": [ - "/${VERSION}/drop-constraint.html" - ] - }, - { - "title": "DROP DATABASE", - "urls": [ - "/${VERSION}/drop-database.html" - ] - }, - { - "title": "DROP REGION (Enterprise)", - "urls": [ - "/${VERSION}/drop-region.html" - ] - }, - { - "title": "DROP SUPER REGION (Enterprise)", - "urls": [ - "/${VERSION}/drop-super-region.html" - ] - }, - { - "title": "DROP TYPE", - "urls": [ - "/${VERSION}/drop-type.html" - ] - }, - { - "title": "DROP INDEX", - "urls": [ - "/${VERSION}/drop-index.html" - ] - }, - { - "title": "DROP ROLE", - 
"urls": [ - "/${VERSION}/drop-role.html" - ] - }, - { - "title": "DROP SCHEDULES", - "urls": [ - "/${VERSION}/drop-schedules.html" - ] - }, - { - "title": "DROP SCHEMA", - "urls": [ - "/${VERSION}/drop-schema.html" - ] - }, - { - "title": "DROP SEQUENCE", - "urls": [ - "/${VERSION}/drop-sequence.html" - ] - }, - { - "title": "DROP TABLE", - "urls": [ - "/${VERSION}/drop-table.html" - ] - }, - { - "title": "DROP USER", - "urls": [ - "/${VERSION}/drop-user.html" - ] - }, - { - "title": "DROP VIEW", - "urls": [ - "/${VERSION}/drop-view.html" - ] - }, - { - "title": "EXPERIMENTAL CHANGEFEED FOR", - "urls": [ - "/${VERSION}/changefeed-for.html" - ] - }, - { - "title": "EXPLAIN", - "urls": [ - "/${VERSION}/explain.html" - ] - }, - { - "title": "EXPLAIN ANALYZE", - "urls": [ - "/${VERSION}/explain-analyze.html" - ] - }, - { - "title": "EXPORT", - "urls": [ - "/${VERSION}/export.html" - ] - }, - { - "title": "GRANT", - "urls": [ - "/${VERSION}/grant.html" - ] - }, - { - "title": "IMPORT", - "urls": [ - "/${VERSION}/import.html" - ] - }, - { - "title": "IMPORT INTO", - "urls": [ - "/${VERSION}/import-into.html" - ] - }, - { - "title": "INSERT", - "urls": [ - "/${VERSION}/insert.html" - ] - }, - { - "title": "JOIN", - "urls": [ - "/${VERSION}/joins.html" - ] - }, - { - "title": "LIMIT/OFFSET", - "urls": [ - "/${VERSION}/limit-offset.html" - ] - }, - { - "title": "ORDER BY", - "urls": [ - "/${VERSION}/order-by.html" - ] - }, - { - "title": "OWNER TO", - "urls": [ - "/${VERSION}/owner-to.html" - ] - }, - { - "title": "PARTITION BY (Enterprise)", - "urls": [ - "/${VERSION}/partition-by.html" - ] - }, - { - "title": "PAUSE JOB", - "urls": [ - "/${VERSION}/pause-job.html" - ] - }, - { - "title": "PAUSE SCHEDULES", - "urls": [ - "/${VERSION}/pause-schedules.html" - ] - }, - { - "title": "PLACEMENT (RESTRICTED | DEFAULT)", - "urls": [ - "/${VERSION}/placement-restricted.html" - ] - }, - { - "title": "REASSIGN OWNED", - "urls": [ - "/${VERSION}/reassign-owned.html" - ] - }, - { - "title": "REFRESH", - "urls": [ - "/${VERSION}/refresh.html" - ] - }, - { - "title": "RENAME COLUMN", - "urls": [ - "/${VERSION}/rename-column.html" - ] - }, - { - "title": "RENAME CONSTRAINT", - "urls": [ - "/${VERSION}/rename-constraint.html" - ] - }, - { - "title": "RENAME DATABASE", - "urls": [ - "/${VERSION}/rename-database.html" - ] - }, - { - "title": "RENAME INDEX", - "urls": [ - "/${VERSION}/rename-index.html" - ] - }, - { - "title": "RENAME TABLE", - "urls": [ - "/${VERSION}/rename-table.html" - ] - }, - { - "title": "RELEASE SAVEPOINT", - "urls": [ - "/${VERSION}/release-savepoint.html" - ] - }, - { - "title": "RESET CLUSTER SETTING", - "urls": [ - "/${VERSION}/reset-cluster-setting.html" - ] - }, - { - "title": "RESET {session variable}", - "urls": [ - "/${VERSION}/reset-vars.html" - ] - }, - { - "title": "RESET {storage parameter}", - "urls": [ - "/${VERSION}/reset-storage-parameter.html" - ] - }, - { - "title": "RESTORE", - "urls": [ - "/${VERSION}/restore.html" - ] - }, - { - "title": "RESUME JOB", - "urls": [ - "/${VERSION}/resume-job.html" - ] - }, - { - "title": "RESUME SCHEDULES", - "urls": [ - "/${VERSION}/resume-schedules.html" - ] - }, - { - "title": "REVOKE", - "urls": [ - "/${VERSION}/revoke.html" - ] - }, - { - "title": "ROLLBACK", - "urls": [ - "/${VERSION}/rollback-transaction.html" - ] - }, - { - "title": "SAVEPOINT", - "urls": [ - "/${VERSION}/savepoint.html" - ] - }, - { - "title": "SELECT", - "urls": [ - "/${VERSION}/select-clause.html" - ] - }, - { - "title": "SELECT FOR UPDATE", - "urls": [ - 
"/${VERSION}/select-for-update.html" - ] - }, - { - "title": "SET CLUSTER SETTING", - "urls": [ - "/${VERSION}/set-cluster-setting.html" - ] - }, - { - "title": "SET {session variable}", - "urls": [ - "/${VERSION}/set-vars.html" - ] - }, - { - "title": "SET {storage parameter}", - "urls": [ - "/${VERSION}/set-storage-parameter.html" - ] - }, - { - "title": "SET LOCALITY", - "urls": [ - "/${VERSION}/set-locality.html" - ] - }, - { - "title": "SET PRIMARY REGION (Enterprise)", - "urls": [ - "/${VERSION}/set-primary-region.html" - ] - }, - { - "title": "SET SCHEMA", - "urls": [ - "/${VERSION}/set-schema.html" - ] - }, - { - "title": "SET TRANSACTION", - "urls": [ - "/${VERSION}/set-transaction.html" - ] - }, - { - "title": "SHOW BACKUP", - "urls": [ - "/${VERSION}/show-backup.html" - ] - }, - { - "title": "SHOW CLUSTER SETTING", - "urls": [ - "/${VERSION}/show-cluster-setting.html" - ] - }, - { - "title": "SHOW COLUMNS", - "urls": [ - "/${VERSION}/show-columns.html" - ] - }, - { - "title": "SHOW CONSTRAINTS", - "urls": [ - "/${VERSION}/show-constraints.html" - ] - }, - { - "title": "SHOW CREATE", - "urls": [ - "/${VERSION}/show-create.html" - ] - }, - { - "title": "SHOW CREATE SCHEDULE", - "urls": [ - "/${VERSION}/show-create-schedule.html" - ] - }, - { - "title": "SHOW DATABASES", - "urls": [ - "/${VERSION}/show-databases.html" - ] - }, - { - "title": "SHOW DEFAULT PRIVILEGES", - "urls": [ - "/${VERSION}/show-default-privileges.html" - ] - }, - { - "title": "SHOW ENUMS", - "urls": [ - "/${VERSION}/show-enums.html" - ] - }, - { - "title": "SHOW FULL TABLE SCANS", - "urls": [ - "/${VERSION}/show-full-table-scans.html" - ] - }, - { - "title": "SHOW GRANTS", - "urls": [ - "/${VERSION}/show-grants.html" - ] - }, - { - "title": "SHOW INDEX", - "urls": [ - "/${VERSION}/show-index.html" - ] - }, - { - "title": "SHOW JOBS", - "urls": [ - "/${VERSION}/show-jobs.html" - ] - }, - { - "title": "SHOW LOCALITY", - "urls": [ - "/${VERSION}/show-locality.html" - ] - }, - { - "title": "SHOW PARTITIONS (Enterprise)", - "urls": [ - "/${VERSION}/show-partitions.html" - ] - }, - { - "title": "SHOW RANGES", - "urls": [ - "/${VERSION}/show-ranges.html" - ] - }, - { - "title": "SHOW RANGE FOR ROW", - "urls": [ - "/${VERSION}/show-range-for-row.html" - ] - }, - { - "title": "SHOW REGIONS", - "urls": [ - "/${VERSION}/show-regions.html" - ] - }, - { - "title": "SHOW {session variable}", - "urls": [ - "/${VERSION}/show-vars.html" - ] - }, - { - "title": "SHOW SUPER REGIONS", - "urls": [ - "/${VERSION}/show-super-regions.html" - ] - }, - { - "title": "SHOW ROLES", - "urls": [ - "/${VERSION}/show-roles.html" - ] - }, - { - "title": "SHOW SCHEDULES", - "urls": [ - "/${VERSION}/show-schedules.html" - ] - }, - { - "title": "SHOW SCHEMAS", - "urls": [ - "/${VERSION}/show-schemas.html" - ] - }, - { - "title": "SHOW SEQUENCES", - "urls": [ - "/${VERSION}/show-sequences.html" - ] - }, - { - "title": "SHOW SESSIONS", - "urls": [ - "/${VERSION}/show-sessions.html" - ] - }, - { - "title": "SHOW STATEMENTS", - "urls": [ - "/${VERSION}/show-statements.html" - ] - }, - { - "title": "SHOW STATISTICS", - "urls": [ - "/${VERSION}/show-statistics.html" - ] - }, - { - "title": "SHOW SAVEPOINT STATUS", - "urls": [ - "/${VERSION}/show-savepoint-status.html" - ] - }, - { - "title": "SHOW TABLES", - "urls": [ - "/${VERSION}/show-tables.html" - ] - }, - { - "title": "SHOW TRACE FOR SESSION", - "urls": [ - "/${VERSION}/show-trace.html" - ] - }, - { - "title": "SHOW TRANSACTIONS", - "urls": [ - "/${VERSION}/show-transactions.html" - ] - }, - { - 
"title": "SHOW TYPES", - "urls": [ - "/${VERSION}/show-types.html" - ] - }, - { - "title": "SHOW USERS", - "urls": [ - "/${VERSION}/show-users.html" - ] - }, - { - "title": "SHOW ZONE CONFIGURATIONS", - "urls": [ - "/${VERSION}/show-zone-configurations.html" - ] - }, - { - "title": "SPLIT AT", - "urls": [ - "/${VERSION}/split-at.html" - ] - }, - { - "title": "SURVIVE {ZONE,REGION} FAILURE", - "urls": [ - "/${VERSION}/survive-failure.html" - ] - }, - { - "title": "TRUNCATE", - "urls": [ - "/${VERSION}/truncate.html" - ] - }, - { - "title": "UNSPLIT AT", - "urls": [ - "/${VERSION}/unsplit-at.html" - ] - }, - { - "title": "UPDATE", - "urls": [ - "/${VERSION}/update.html" - ] - }, - { - "title": "UPSERT", - "urls": [ - "/${VERSION}/upsert.html" - ] - }, - { - "title": "VALIDATE CONSTRAINT", - "urls": [ - "/${VERSION}/validate-constraint.html" - ] - }, - { - "title": "WITH {storage parameter}", - "urls": [ - "/${VERSION}/with-storage-parameter.html" - ] - } - ] - }, - { - "title": "Data Types", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/data-types.html" - ] - }, - { - "title": "ARRAY", - "urls": [ - "/${VERSION}/array.html" - ] - }, - { - "title": "BIT", - "urls": [ - "/${VERSION}/bit.html" - ] - }, - { - "title": "BOOL", - "urls": [ - "/${VERSION}/bool.html" - ] - }, - { - "title": "BYTES", - "urls": [ - "/${VERSION}/bytes.html" - ] - }, - { - "title": "COLLATE", - "urls": [ - "/${VERSION}/collate.html" - ] - }, - { - "title": "DATE", - "urls": [ - "/${VERSION}/date.html" - ] - }, - { - "title": "DECIMAL", - "urls": [ - "/${VERSION}/decimal.html" - ] - }, - { - "title": "ENUM", - "urls": [ - "/${VERSION}/enum.html" - ] - }, - { - "title": "FLOAT", - "urls": [ - "/${VERSION}/float.html" - ] - }, - { - "title": "INET", - "urls": [ - "/${VERSION}/inet.html" - ] - }, - { - "title": "INT", - "urls": [ - "/${VERSION}/int.html" - ] - }, - { - "title": "INTERVAL", - "urls": [ - "/${VERSION}/interval.html" - ] - }, - { - "title": "JSONB", - "urls": [ - "/${VERSION}/jsonb.html" - ] - }, - { - "title": "OID", - "urls": [ - "/${VERSION}/oid.html" - ] - }, - { - "title": "SERIAL", - "urls": [ - "/${VERSION}/serial.html" - ] - }, - { - "title": "STRING", - "urls": [ - "/${VERSION}/string.html" - ] - }, - { - "title": "TIME", - "urls": [ - "/${VERSION}/time.html" - ] - }, - { - "title": "TIMESTAMP", - "urls": [ - "/${VERSION}/timestamp.html" - ] - }, - { - "title": "UUID", - "urls": [ - "/${VERSION}/uuid.html" - ] - } - ] - }, - { - "title": "Constraints", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/constraints.html" - ] - }, - { - "title": "Check", - "urls": [ - "/${VERSION}/check.html" - ] - }, - { - "title": "Default Value", - "urls": [ - "/${VERSION}/default-value.html" - ] - }, - { - "title": "Foreign Key", - "urls": [ - "/${VERSION}/foreign-key.html" - ] - }, - { - "title": "Not Null", - "urls": [ - "/${VERSION}/not-null.html" - ] - }, - { - "title": "Primary Key", - "urls": [ - "/${VERSION}/primary-key.html" - ] - }, - { - "title": "Unique", - "urls": [ - "/${VERSION}/unique.html" - ] - } - ] - }, - { - "title": "Functions and Operators", - "urls": [ - "/${VERSION}/functions-and-operators.html" - ] - }, - { - "title": "Window Functions", - "urls": [ - "/${VERSION}/window-functions.html" - ] - }, - { - "title": "Name Resolution", - "urls": [ - "/${VERSION}/sql-name-resolution.html" - ] - }, - { - "title": "System Catalogs", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/system-catalogs.html" - ] - }, - { - "title": "crdb_internal", - 
"urls": [ - "/${VERSION}/crdb-internal.html" - ] - }, - { - "title": "information_schema", - "urls": [ - "/${VERSION}/information-schema.html" - ] - }, - { - "title": "pg_catalog", - "urls": [ - "/${VERSION}/pg-catalog.html" - ] - }, - { - "title": "pg_extension", - "urls": [ - "/${VERSION}/pg-extension.html" - ] - } - ] - }, - { - "title": "Spatial Features", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/spatial-features.html" - ] - }, - { - "title": "Work with Spatial Data", - "urls": [ - "/${VERSION}/spatial-data.html" - ] - }, - { - "title": "Spatial and GIS Glossary", - "urls": [ - "/${VERSION}/spatial-glossary.html" - ] - }, - { - "title": "POINT", - "urls": [ - "/${VERSION}/point.html" - ] - }, - { - "title": "LINESTRING", - "urls": [ - "/${VERSION}/linestring.html" - ] - }, - { - "title": "POLYGON", - "urls": [ - "/${VERSION}/polygon.html" - ] - }, - { - "title": "MULTIPOINT", - "urls": [ - "/${VERSION}/multipoint.html" - ] - }, - { - "title": "MULTILINESTRING", - "urls": [ - "/${VERSION}/multilinestring.html" - ] - }, - { - "title": "MULTIPOLYGON", - "urls": [ - "/${VERSION}/multipolygon.html" - ] - }, - { - "title": "GEOMETRYCOLLECTION", - "urls": [ - "/${VERSION}/geometrycollection.html" - ] - }, - { - "title": "Well Known Text (WKT)", - "urls": [ - "/${VERSION}/well-known-text.html" - ] - }, - { - "title": "Well Known Binary (WKB)", - "urls": [ - "/${VERSION}/well-known-binary.html" - ] - }, - { - "title": "GeoJSON", - "urls": [ - "/${VERSION}/geojson.html" - ] - }, - { - "title": "SRID 4326 - longitude and latitude", - "urls": [ - "/${VERSION}/srid-4326.html" - ] - }, - { - "title": "ST_Contains", - "urls": [ - "/${VERSION}/st_contains.html" - ] - }, - { - "title": "ST_Within", - "urls": [ - "/${VERSION}/st_within.html" - ] - }, - { - "title": "ST_Intersects", - "urls": [ - "/${VERSION}/st_intersects.html" - ] - }, - { - "title": "ST_CoveredBy", - "urls": [ - "/${VERSION}/st_coveredby.html" - ] - }, - { - "title": "ST_Covers", - "urls": [ - "/${VERSION}/st_covers.html" - ] - }, - { - "title": "ST_Disjoint", - "urls": [ - "/${VERSION}/st_disjoint.html" - ] - }, - { - "title": "ST_Equals", - "urls": [ - "/${VERSION}/st_equals.html" - ] - }, - { - "title": "ST_Overlaps", - "urls": [ - "/${VERSION}/st_overlaps.html" - ] - }, - { - "title": "ST_Touches", - "urls": [ - "/${VERSION}/st_touches.html" - ] - }, - { - "title": "ST_ConvexHull", - "urls": [ - "/${VERSION}/st_convexhull.html" - ] - }, - { - "title": "ST_Union", - "urls": [ - "/${VERSION}/st_union.html" - ] - } - ] - } - ] - }, - { - "title": "Cluster Settings", - "urls": [ - "/${VERSION}/cluster-settings.html" - ] - }, - { - "title": "Security", - "items": [ - { - "title": "Security Overview", - "urls": [ - "/${VERSION}/security-reference/security-overview.html" - ] - }, - { - "title": "Authentication", - "urls": [ - "/${VERSION}/security-reference/authentication.html" - ], - "items": [ - { - "title": "SASL/SCRAM-SHA-256 Secure Password-based Authentication", - "urls": [ "/${VERSION}/security-reference/scram-authentication.html" ] - } - ] - }, - { - "title": "Authorization", - "urls": [ - "/${VERSION}/security-reference/authorization.html" - ] - }, - { - "title": "Encryption", - "urls": [ - "/${VERSION}/security-reference/encryption.html" - ] - }, - { - "title": "Public Key Infrastructure (PKI) and Transport Layer Security (TLS)", - "urls": [ - "/${VERSION}/security-reference/transport-layer-security.html" - ] - }, - { - "title": "Customer-Managed Encryption Keys (CMEK)", - "urls": [ - 
"/cockroachcloud/cmek.html" - ] - }, - { - "title": "Payment Card Industry Data Security Standard (PCI DSS)", - "urls": [ - "/cockroachcloud/pci-dss.html" - ] - } - ] - }, - { - "title": "CLI", - "items": [ - { - "title": "cockroach Commands Overview", - "urls": [ - "/${VERSION}/cockroach-commands.html" - ] - }, - { - "title": "Client Connection Parameters", - "urls": [ - "/${VERSION}/connection-parameters.html" - ] - }, - { - "title": "cockroach Commands", - "items": [ - { - "title": "cockroach start", - "urls": [ - "/${VERSION}/cockroach-start.html" - ] - }, - { - "title": "cockroach init", - "urls": [ - "/${VERSION}/cockroach-init.html" - ] - }, - { - "title": "cockroach start-single-node", - "urls": [ - "/${VERSION}/cockroach-start-single-node.html" - ] - }, - { - "title": "cockroach cert", - "urls": [ - "/${VERSION}/cockroach-cert.html" - ] - }, - { - "title": "cockroach sql", - "urls": [ - "/${VERSION}/cockroach-sql.html" - ] - }, - { - "title": "cockroach sqlfmt", - "urls": [ - "/${VERSION}/cockroach-sqlfmt.html" - ] - }, - { - "title": "cockroach node", - "urls": [ - "/${VERSION}/cockroach-node.html" - ] - }, - { - "title": "cockroach nodelocal upload", - "urls": [ - "/${VERSION}/cockroach-nodelocal-upload.html" - ] - }, - { - "title": "cockroach auth-session", - "urls": [ - "/${VERSION}/cockroach-auth-session.html" - ] - }, - { - "title": "cockroach demo", - "urls": [ - "/${VERSION}/cockroach-demo.html" - ] - }, - { - "title": "cockroach debug ballast", - "urls": [ - "/${VERSION}/cockroach-debug-ballast.html" - ] - }, - { - "title": "cockroach debug encryption-active-key", - "urls": [ - "/${VERSION}/cockroach-debug-encryption-active-key.html" - ] - }, - { - "title": "cockroach debug job-trace", - "urls": [ - "/${VERSION}/cockroach-debug-job-trace.html" - ] - }, - { - "title": "cockroach debug list-files", - "urls": [ - "/${VERSION}/cockroach-debug-list-files.html" - ] - }, - { - "title": "cockroach debug merge-logs", - "urls": [ - "/${VERSION}/cockroach-debug-merge-logs.html" - ] - }, - { - "title": "cockroach debug tsdump", - "urls": [ - "/${VERSION}/cockroach-debug-tsdump.html" - ] - }, - { - "title": "cockroach debug zip", - "urls": [ - "/${VERSION}/cockroach-debug-zip.html" - ] - }, - { - "title": "cockroach statement-diag", - "urls": [ - "/${VERSION}/cockroach-statement-diag.html" - ] - }, - { - "title": "cockroach gen", - "urls": [ - "/${VERSION}/cockroach-gen.html" - ] - }, - { - "title": "cockroach userfile upload", - "urls": [ - "/${VERSION}/cockroach-userfile-upload.html" - ] - }, - { - "title": "cockroach userfile list", - "urls": [ - "/${VERSION}/cockroach-userfile-list.html" - ] - }, - { - "title": "cockroach userfile get", - "urls": [ - "/${VERSION}/cockroach-userfile-get.html" - ] - }, - { - "title": "cockroach userfile delete", - "urls": [ - "/${VERSION}/cockroach-userfile-delete.html" - ] - }, - { - "title": "cockroach version", - "urls": [ - "/${VERSION}/cockroach-version.html" - ] - }, - { - "title": "cockroach workload", - "urls": [ - "/${VERSION}/cockroach-workload.html" - ] - }, - { - "title": "cockroach import", - "urls": [ - "/${VERSION}/cockroach-import.html" - ] - } - ] - }, - { - "title": "The cockroach-sql command", - "urls": [ - "/${VERSION}/cockroach-sql-binary.html" - ] - } - ] - }, - { - "title": "DB Console", - "items": [ - { - "title": "DB Console Overview", - "urls": [ - "/${VERSION}/ui-overview.html" - ] - }, - { - "title": "Cluster Overview Page", - "urls": [ - "/${VERSION}/ui-cluster-overview-page.html" - ] - }, - { - "title": "Metrics 
Dashboards", - "items": [ - { - "title": "Overview Dashboard", - "urls": [ - "/${VERSION}/ui-overview-dashboard.html" - ] - }, - { - "title": "Hardware Dashboard", - "urls": [ - "/${VERSION}/ui-hardware-dashboard.html" - ] - }, - { - "title": "Runtime Dashboard", - "urls": [ - "/${VERSION}/ui-runtime-dashboard.html" - ] - }, - { - "title": "SQL Dashboard", - "urls": [ - "/${VERSION}/ui-sql-dashboard.html" - ] - }, - { - "title": "Storage Dashboard", - "urls": [ - "/${VERSION}/ui-storage-dashboard.html" - ] - }, - { - "title": "Replication Dashboard", - "urls": [ - "/${VERSION}/ui-replication-dashboard.html" - ] - }, - { - "title": "Distributed Dashboard", - "urls": [ - "/${VERSION}/ui-distributed-dashboard.html" - ] - }, - { - "title": "Queues Dashboard", - "urls": [ - "/${VERSION}/ui-queues-dashboard.html" - ] - }, - { - "title": "Slow Requests Dashboard", - "urls": [ - "/${VERSION}/ui-slow-requests-dashboard.html" - ] - }, - { - "title": "Changefeeds Dashboard", - "urls": [ - "/${VERSION}/ui-cdc-dashboard.html" - ] - }, - { - "title": "Overload Dashboard", - "urls": [ - "/${VERSION}/ui-overload-dashboard.html" - ] - }, - { - "title": "Custom Chart", - "urls": [ - "/${VERSION}/ui-custom-chart-debug-page.html" - ] - } - ] - }, - { - "title": "Databases Page", - "urls": [ - "/${VERSION}/ui-databases-page.html" - ] - }, - { - "title": "Sessions Page", - "urls": [ - "/${VERSION}/ui-sessions-page.html" - ] - }, - { - "title": "Statements Page", - "urls": [ - "/${VERSION}/ui-statements-page.html" - ] - }, - { - "title": "Transactions Page", - "urls": [ - "/${VERSION}/ui-transactions-page.html" - ] - }, - { - "title": "Network Latency Page", - "urls": [ - "/${VERSION}/ui-network-latency-page.html" - ] - }, - { - "title": "Hot Ranges Page", - "urls": [ - "/${VERSION}/ui-hot-ranges-page.html" - ] - }, - { - "title": "Jobs Page", - "urls": [ - "/${VERSION}/ui-jobs-page.html" - ] - }, - { - "title": "Advanced Debug Page", - "urls": [ - "/${VERSION}/ui-debug-pages.html" - ] - } - ] - }, - { - "title": "Metrics", - "urls": [ - "/${VERSION}/metrics.html" - ] - }, - { - "title": "Transaction Retry Error Reference", - "urls": [ - "/${VERSION}/transaction-retry-error-reference.html" - ] - }, - { - "title": "Cluster API", - "urls": [ - "https://www.cockroachlabs.com/docs/api/cluster/v2" - ] - }, - { - "title": "Cloud API", - "urls": [ - "https://www.cockroachlabs.com/docs/api/cloud/v1" - ] - }, - { - "title": "Logging", - "items": [ - { - "title": "Logging Levels and Channels", - "urls": [ - "/${VERSION}/logging.html" - ] - }, - { - "title": "Log Formats", - "urls": [ - "/${VERSION}/log-formats.html" - ] - }, - { - "title": "Notable Event Types", - "urls": [ - "/${VERSION}/eventlog.html" - ] - } - ] - }, - { - "title": "API Support Policy", - "urls": [ - "/${VERSION}/api-support-policy.html" - ] - }, - { - "title": "Diagnostics Reporting", - "urls": [ - "/${VERSION}/diagnostics-reporting.html" - ] - }, - { - "title": "Benchmarking", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/performance.html" - ] - }, - { - "title": "Benchmarking Instructions", - "urls": [ - "/${VERSION}/performance-benchmarking-with-tpcc-local.html", - "/${VERSION}/performance-benchmarking-with-tpcc-local-multiregion.html", - "/${VERSION}/performance-benchmarking-with-tpcc-small.html", - "/${VERSION}/performance-benchmarking-with-tpcc-medium.html", - "/${VERSION}/performance-benchmarking-with-tpcc-large.html" - ] - } - ] - }, - { - "title": "CockroachDB Feature Availability", - "urls": [ - 
"/${VERSION}/cockroachdb-feature-availability.html" - ] - }, - { - "title": "Third-Party Support", - "items": [ - { - "title": "Tools Supported by Cockroach Labs", - "urls": [ - "/${VERSION}/third-party-database-tools.html" - ] - }, - { - "title": "Tools Supported by the Community", - "urls": [ - "/${VERSION}/community-tooling.html" - ] - } - ] - } - ] - } diff --git a/src/current/_includes/v22.1/sidebar-data/releases.json b/src/current/_includes/v22.1/sidebar-data/releases.json deleted file mode 100644 index 18f2a1b7c6a..00000000000 --- a/src/current/_includes/v22.1/sidebar-data/releases.json +++ /dev/null @@ -1,7 +0,0 @@ -{ - "title": "Releases", - "is_top_level": true, - "items": [ - {% include_cached sidebar-releases.json %} - ] - } diff --git a/src/current/_includes/v22.1/sidebar-data/stream.json b/src/current/_includes/v22.1/sidebar-data/stream.json deleted file mode 100644 index e841f8782e7..00000000000 --- a/src/current/_includes/v22.1/sidebar-data/stream.json +++ /dev/null @@ -1,93 +0,0 @@ -{ - "title": "Stream Data", - "is_top_level": true, - "items": [ - { - "title": "Change Data Capture Overview", - "urls": [ - "/${VERSION}/change-data-capture-overview.html" - ] - }, - { - "title": "Get Started with Changefeeds", - "items": [ - { - "title": "Create and Configure Changefeeds", - "urls": [ - "/${VERSION}/create-and-configure-changefeeds.html" - ] - }, - { - "title": "Changefeed Messages", - "urls": [ - "/${VERSION}/changefeed-messages.html" - ] - }, - { - "title": "Changefeed Sinks", - "urls": [ - "/${VERSION}/changefeed-sinks.html" - ] - }, - { - "title": "Changefeed Examples", - "urls": [ - "/${VERSION}/changefeed-examples.html" - ] - } - ] - }, - { - "title": "Work with Changefeeds", - "items": [ - { - "title": "Changefeeds on Tables with Column Families", - "urls": [ - "/${VERSION}/changefeeds-on-tables-with-column-families.html" - ] - }, - { - "title": "Export Data with Changefeeds", - "urls": [ - "/${VERSION}/export-data-with-changefeeds.html" - ] - }, - { - "title": "Changefeeds in Multi-Region Deployments", - "urls": [ - "/${VERSION}/changefeeds-in-multi-region-deployments.html" - ] - } - ] - }, - { - "title": "Monitor and Debug Changefeeds", - "urls": [ - "/${VERSION}/monitor-and-debug-changefeeds.html" - ] - }, - { - "title": "Tutorials", - "items": [ - { - "title": "Stream a Changefeed from CockroachDB Cloud to Snowflake", - "urls": [ - "/cockroachcloud/stream-changefeed-to-snowflake-aws.html" - ] - }, - { - "title": "Stream a Changefeed to a Confluent Cloud Kafka Cluster", - "urls": [ - "/${VERSION}/stream-a-changefeed-to-a-confluent-cloud-kafka-cluster.html" - ] - } - ] - }, - { - "title": "Advanced Changefeed Configuration", - "urls": [ - "/${VERSION}/advanced-changefeed-configuration.html" - ] - } - ] -} diff --git a/src/current/_includes/v22.1/spatial/ogr2ogr-supported-version.md b/src/current/_includes/v22.1/spatial/ogr2ogr-supported-version.md deleted file mode 100644 index ad444257227..00000000000 --- a/src/current/_includes/v22.1/spatial/ogr2ogr-supported-version.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -An `ogr2ogr` version of 3.1.0 or higher is required to generate data that can be imported into CockroachDB. 
-{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/spatial/zmcoords.md b/src/current/_includes/v22.1/spatial/zmcoords.md deleted file mode 100644 index fedbb74e703..00000000000 --- a/src/current/_includes/v22.1/spatial/zmcoords.md +++ /dev/null @@ -1,27 +0,0 @@ - You can also store a `{{page.title}}` with the following additional dimensions: - -- A third dimension coordinate `Z` (`{{page.title}}Z`). -- A measure coordinate `M` (`{{page.title}}M`). -- Both a third dimension and a measure coordinate (`{{page.title}}ZM`). - -The `Z` and `M` dimensions can be accessed or modified using a number of [built-in functions](functions-and-operators.html#spatial-functions), including: - -- `ST_Z` -- `ST_M` -- `ST_Affine` -- `ST_Zmflag` -- `ST_MakePoint` -- `ST_MakePointM` -- `ST_Force3D` -- `ST_Force3DZ` -- `ST_Force3DM` -- `ST_Force4D` -- `ST_Snap` -- `ST_SnapToGrid` -- `ST_RotateZ` -- `ST_AddMeasure` - -Note that CockroachDB's [spatial indexing](spatial-indexes.html) is still based on the 2D coordinate system. This means that: - -- The Z/M dimension is not index accelerated when using spatial predicates. -- Some spatial functions ignore the Z/M dimension, with transformations discarding the Z/M value. diff --git a/src/current/_includes/v22.1/sql/add-size-limits-to-indexed-columns.md b/src/current/_includes/v22.1/sql/add-size-limits-to-indexed-columns.md deleted file mode 100644 index 91cf3d61a1e..00000000000 --- a/src/current/_includes/v22.1/sql/add-size-limits-to-indexed-columns.md +++ /dev/null @@ -1,22 +0,0 @@ -We **strongly recommend** adding size limits to all [indexed columns](indexes.html), which includes columns in [primary keys](primary-key.html). - -Values exceeding 1 MiB can lead to [storage layer write amplification](architecture/storage-layer.html#write-amplification) and cause significant performance degradation or even [crashes due to OOMs (out of memory errors)](cluster-setup-troubleshooting.html#out-of-memory-oom-crash). - -To add a size limit using [`CREATE TABLE`](create-table.html): - -{% include_cached copy-clipboard.html %} -~~~ sql -CREATE TABLE name (first STRING(100), last STRING(100)); -~~~ - -To add a size limit using [`ALTER TABLE ... 
ALTER COLUMN`](alter-column.html): - -{% include_cached copy-clipboard.html %} -~~~ sql -SET enable_experimental_alter_column_type_general = true; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER TABLE name ALTER first TYPE STRING(99); -~~~ diff --git a/src/current/_includes/v22.1/sql/begin-transaction-as-of-system-time-example.md b/src/current/_includes/v22.1/sql/begin-transaction-as-of-system-time-example.md deleted file mode 100644 index 7f2c11dac77..00000000000 --- a/src/current/_includes/v22.1/sql/begin-transaction-as-of-system-time-example.md +++ /dev/null @@ -1,19 +0,0 @@ -{% include_cached copy-clipboard.html %} -~~~ sql -> BEGIN AS OF SYSTEM TIME '2019-04-09 18:02:52.0+00:00'; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM orders; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM products; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> COMMIT; -~~~ diff --git a/src/current/_includes/v22.1/sql/cannot-refresh-materialized-views-inside-transactions.md b/src/current/_includes/v22.1/sql/cannot-refresh-materialized-views-inside-transactions.md deleted file mode 100644 index 95d65f985fa..00000000000 --- a/src/current/_includes/v22.1/sql/cannot-refresh-materialized-views-inside-transactions.md +++ /dev/null @@ -1,27 +0,0 @@ -- CockroachDB cannot refresh {% if page.name == "views.md" %} materialized views {% else %} [materialized views](views.html#materialized-views) {% endif %} inside [explicit transactions](begin-transaction.html). Trying to refresh a materialized view inside an explicit transaction will result in an error. - 1. Start [`cockroach demo`](cockroach-demo.html) with the sample `bank` data set: - - {% include_cached copy-clipboard.html %} - ~~~ shell - cockroach demo bank - ~~~ - 1. Create the materialized view described in [Usage](views.html#usage). - 1. Start a new multi-statement transaction with [`BEGIN TRANSACTION`](begin-transaction.html): - - {% include_cached copy-clipboard.html %} - ~~~ sql - BEGIN TRANSACTION; - ~~~ - 1. Inside the open transaction, attempt to [refresh the view](refresh.html). This will result in an error. - - {% include_cached copy-clipboard.html %} - ~~~ sql - REFRESH MATERIALIZED VIEW overdrawn_accounts; - ~~~ - - ~~~ - ERROR: cannot refresh view in an explicit transaction - SQLSTATE: 25000 - ~~~ - - [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/66008) diff --git a/src/current/_includes/v22.1/sql/combine-alter-table-commands.md b/src/current/_includes/v22.1/sql/combine-alter-table-commands.md deleted file mode 100644 index 62839cce017..00000000000 --- a/src/current/_includes/v22.1/sql/combine-alter-table-commands.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_success}} -This command can be combined with other `ALTER TABLE` commands in a single statement. For a list of commands that can be combined, see [`ALTER TABLE`](alter-table.html). For a demonstration, see [Add and rename columns atomically](rename-column.html#add-and-rename-columns-atomically). -{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/sql/connection-parameters.md b/src/current/_includes/v22.1/sql/connection-parameters.md deleted file mode 100644 index 769f3c776d6..00000000000 --- a/src/current/_includes/v22.1/sql/connection-parameters.md +++ /dev/null @@ -1,9 +0,0 @@ -Flag | Description ------|------------ - `--url` | A [connection URL](connection-parameters.html#connect-using-a-url) to use instead of the other arguments. 
To convert a connection URL to the syntax that works with your client driver, run [`cockroach convert-url`](connection-parameters.html#convert-a-url-for-different-drivers).<br><br>**Env Variable:** `COCKROACH_URL`<br>**Default:** no URL
-`--host` | The server host and port number to connect to. This can be the address of any node in the cluster.<br><br>**Env Variable:** `COCKROACH_HOST`<br>**Default:** `localhost:26257`
-`--port`<br>`-p` | The server port to connect to. Note: The port number can also be specified via `--host`.<br><br>**Env Variable:** `COCKROACH_PORT`<br>**Default:** `26257`
-`--user`<br>`-u` | The [SQL user](create-user.html) that will own the client session.<br><br>**Env Variable:** `COCKROACH_USER`<br>**Default:** `root`
-`--insecure` | Use an insecure connection.<br><br>**Env Variable:** `COCKROACH_INSECURE`<br>**Default:** `false`
-`--cert-principal-map` | A comma-separated list of `<cert-principal>:<db-principal>` mappings. This allows mapping the principal in a cert to a DB principal such as `node` or `root` or any SQL user. This is intended for use in situations where the certificate management system places restrictions on the `Subject.CommonName` or `SubjectAlternateName` fields in the certificate (e.g., disallowing a `CommonName` like `node` or `root`). If multiple mappings are provided for the same `<cert-principal>`, the last one specified in the list takes precedence. A principal not specified in the map is passed through as-is via the identity function. A cert is allowed to authenticate a DB principal if the DB principal name is contained in the mapped `CommonName` or DNS-type `SubjectAlternateName` fields.
-`--certs-dir` | The path to the [certificate directory](cockroach-cert.html) containing the CA and client certificates and client key.<br><br>**Env Variable:** `COCKROACH_CERTS_DIR`<br>
**Default:** `${HOME}/.cockroach-certs/` \ No newline at end of file diff --git a/src/current/_includes/v22.1/sql/covering-index.md b/src/current/_includes/v22.1/sql/covering-index.md deleted file mode 100644 index 4ce5b00cf12..00000000000 --- a/src/current/_includes/v22.1/sql/covering-index.md +++ /dev/null @@ -1 +0,0 @@ -An index that stores all the columns needed by a query is also known as a _covering index_ for that query. When a query has a covering index, CockroachDB can use that index directly instead of doing an "index join" with the primary index, which is likely to be slower. diff --git a/src/current/_includes/v22.1/sql/crdb-internal-partitions-example.md b/src/current/_includes/v22.1/sql/crdb-internal-partitions-example.md deleted file mode 100644 index 680b0adf261..00000000000 --- a/src/current/_includes/v22.1/sql/crdb-internal-partitions-example.md +++ /dev/null @@ -1,43 +0,0 @@ -## Querying partitions programmatically - -The `crdb_internal.partitions` internal table contains information about the partitions in your database. In testing, scripting, and other programmatic environments, we recommend querying this table for partition information instead of using the `SHOW PARTITIONS` statement. For example, to get all `us_west` partitions of in your database, you can run the following query: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM crdb_internal.partitions WHERE name='us_west'; -~~~ - -~~~ - table_id | index_id | parent_name | name | columns | column_names | list_value | range_value | zone_id | subzone_id -+----------+----------+-------------+---------+---------+--------------+-------------------------------------------------+-------------+---------+------------+ - 53 | 1 | NULL | us_west | 1 | city | ('seattle'), ('san francisco'), ('los angeles') | NULL | 0 | 0 - 54 | 1 | NULL | us_west | 1 | city | ('seattle'), ('san francisco'), ('los angeles') | NULL | 54 | 1 - 54 | 2 | NULL | us_west | 1 | city | ('seattle'), ('san francisco'), ('los angeles') | NULL | 54 | 2 - 55 | 1 | NULL | us_west | 1 | city | ('seattle'), ('san francisco'), ('los angeles') | NULL | 55 | 1 - 55 | 2 | NULL | us_west | 1 | city | ('seattle'), ('san francisco'), ('los angeles') | NULL | 55 | 2 - 55 | 3 | NULL | us_west | 1 | vehicle_city | ('seattle'), ('san francisco'), ('los angeles') | NULL | 55 | 3 - 56 | 1 | NULL | us_west | 1 | city | ('seattle'), ('san francisco'), ('los angeles') | NULL | 56 | 1 - 58 | 1 | NULL | us_west | 1 | city | ('seattle'), ('san francisco'), ('los angeles') | NULL | 58 | 1 -(8 rows) -~~~ - -Other internal tables, like `crdb_internal.tables`, include information that could be useful in conjunction with `crdb_internal.partitions`. 
- -For example, if you want the output for your partitions to include the name of the table and database, you can perform a join of the two tables: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT - partitions.name AS partition_name, column_names, list_value, tables.name AS table_name, database_name - FROM crdb_internal.partitions JOIN crdb_internal.tables ON partitions.table_id=tables.table_id - WHERE tables.name='users'; -~~~ - -~~~ - partition_name | column_names | list_value | table_name | database_name -+----------------+--------------+-------------------------------------------------+------------+---------------+ - us_west | city | ('seattle'), ('san francisco'), ('los angeles') | users | movr - us_east | city | ('new york'), ('boston'), ('washington dc') | users | movr - europe_west | city | ('amsterdam'), ('paris'), ('rome') | users | movr -(3 rows) -~~~ diff --git a/src/current/_includes/v22.1/sql/crdb-internal-partitions.md b/src/current/_includes/v22.1/sql/crdb-internal-partitions.md deleted file mode 100644 index ebab5abe4ed..00000000000 --- a/src/current/_includes/v22.1/sql/crdb-internal-partitions.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_success}} -In testing, scripting, and other programmatic environments, we recommend querying the `crdb_internal.partitions` internal table for partition information instead of using the `SHOW PARTITIONS` statement. For more information, see [Querying partitions programmatically](show-partitions.html#querying-partitions-programmatically). -{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/sql/cursors-vs-keyset-pagination.md b/src/current/_includes/v22.1/sql/cursors-vs-keyset-pagination.md deleted file mode 100644 index ba5391b5ace..00000000000 --- a/src/current/_includes/v22.1/sql/cursors-vs-keyset-pagination.md +++ /dev/null @@ -1,3 +0,0 @@ -_Cursors_ are stateful objects that use more database resources than keyset pagination, since each cursor holds open a transaction. However, they are easier to use, and make it easier to get consistent results without having to write complex queries from your application logic. They do not require that the results be returned in a particular order (that is, you don't have to include an `ORDER BY` clause), which makes them more flexible. - -_Keyset pagination_ queries are usually much faster than cursors since they order by indexed columns. However, in order to get that performance they require that you return results in some defined order that can be calculated by your application's queries. Because that ordering involves calculating the start/end point of pages of results based on an indexed key, they require more care to write correctly. diff --git a/src/current/_includes/v22.1/sql/db-terms.md b/src/current/_includes/v22.1/sql/db-terms.md deleted file mode 100644 index ecaf4745fc8..00000000000 --- a/src/current/_includes/v22.1/sql/db-terms.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -To avoid confusion with the general term "[database](https://en.wikipedia.org/wiki/Database)", throughout this guide we refer to the logical object as a *database*, to CockroachDB by name, and to a deployment of CockroachDB as a [*cluster*](architecture/glossary.html#cockroachdb-architecture-terms). 
-{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/sql/dev-schema-change-limits.md b/src/current/_includes/v22.1/sql/dev-schema-change-limits.md deleted file mode 100644 index f778a483420..00000000000 --- a/src/current/_includes/v22.1/sql/dev-schema-change-limits.md +++ /dev/null @@ -1,3 +0,0 @@ -Review the [limitations of online schema changes](online-schema-changes.html#limitations). CockroachDB [doesn't guarantee the atomicity of schema changes within transactions with multiple statements](online-schema-changes.html#schema-changes-within-transactions). - - Cockroach Labs recommends that you perform schema changes outside explicit transactions. When a database [schema management tool](third-party-database-tools.html#schema-migration-tools) manages transactions on your behalf, include one schema change operation per transaction. diff --git a/src/current/_includes/v22.1/sql/dev-schema-changes.md b/src/current/_includes/v22.1/sql/dev-schema-changes.md deleted file mode 100644 index e6aad1f0361..00000000000 --- a/src/current/_includes/v22.1/sql/dev-schema-changes.md +++ /dev/null @@ -1 +0,0 @@ -Use a [database schema migration tool](third-party-database-tools.html#schema-migration-tools) or the [CockroachDB SQL client](cockroach-sql.html) instead of a [client library](third-party-database-tools.html#drivers) to execute [database schema changes](online-schema-changes.html). diff --git a/src/current/_includes/v22.1/sql/enable-super-region-primary-region-changes.md b/src/current/_includes/v22.1/sql/enable-super-region-primary-region-changes.md deleted file mode 100644 index e58c4ac917d..00000000000 --- a/src/current/_includes/v22.1/sql/enable-super-region-primary-region-changes.md +++ /dev/null @@ -1,23 +0,0 @@ -By default, you may not change the [primary region](set-primary-region.html) of a [multi-region database](multiregion-overview.html) when that region is part of a super region. This is a safety setting designed to prevent you from accidentally moving the data for a [regional table](regional-tables.html) that is meant to be stored in the super region out of that super region, which could break your data domiciling setup. 
- -If you are sure about what you are doing, you can allow modifying the primary region by setting the `alter_primary_region_super_region_override` [session setting](set-vars.html) to `'on'`: - -{% include_cached copy-clipboard.html %} -~~~ sql -SET alter_primary_region_super_region_override = 'on'; -~~~ - -~~~ -SET -~~~ - -You can also accomplish this by setting the `sql.defaults.alter_primary_region_super_region_override.enable` [cluster setting](cluster-settings.html) to `true`: - -{% include_cached copy-clipboard.html %} -~~~ sql -SET CLUSTER SETTING sql.defaults.alter_primary_region_super_region_override.enable = true; -~~~ - -~~~ -SET CLUSTER SETTING -~~~ diff --git a/src/current/_includes/v22.1/sql/enable-super-regions.md b/src/current/_includes/v22.1/sql/enable-super-regions.md deleted file mode 100644 index 8d6cd8a4080..00000000000 --- a/src/current/_includes/v22.1/sql/enable-super-regions.md +++ /dev/null @@ -1,21 +0,0 @@ -To enable super regions, set the `enable_super_regions` [session setting](set-vars.html) to `'on'`: - -{% include_cached copy-clipboard.html %} -~~~ sql -SET enable_super_regions = 'on'; -~~~ - -~~~ -SET -~~~ - -You can also set the `sql.defaults.super_regions.enabled` [cluster setting](cluster-settings.html) to `true`: - -{% include_cached copy-clipboard.html %} -~~~ sql -SET CLUSTER SETTING sql.defaults.super_regions.enabled = true; -~~~ - -~~~ -SET CLUSTER SETTING -~~~ diff --git a/src/current/_includes/v22.1/sql/expression-indexes-cannot-reference-computed-columns.md b/src/current/_includes/v22.1/sql/expression-indexes-cannot-reference-computed-columns.md deleted file mode 100644 index 4c66aca7d8b..00000000000 --- a/src/current/_includes/v22.1/sql/expression-indexes-cannot-reference-computed-columns.md +++ /dev/null @@ -1,3 +0,0 @@ -CockroachDB does not allow {% if page.name == "expression-indexes.md" %} expression indexes {% else %} [expression indexes](expression-indexes.html) {% endif %} to reference [computed columns](computed-columns.html). - - [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/67900) diff --git a/src/current/_includes/v22.1/sql/expressions-as-on-conflict-targets.md b/src/current/_includes/v22.1/sql/expressions-as-on-conflict-targets.md deleted file mode 100644 index 2b328c1e4f3..00000000000 --- a/src/current/_includes/v22.1/sql/expressions-as-on-conflict-targets.md +++ /dev/null @@ -1,40 +0,0 @@ -CockroachDB does not support expressions as `ON CONFLICT` targets. This means that unique {% if page.name == "expression-indexes.md" %} expression indexes {% else %} [expression indexes](expression-indexes.html) {% endif %} cannot be selected as arbiters for [`INSERT .. ON CONFLICT`](insert.html#on-conflict-clause) statements. 
For example: - -{% include_cached copy-clipboard.html %} -~~~ sql -CREATE TABLE t (a INT, b INT, UNIQUE INDEX ((a + b))); -~~~ - -~~~ -CREATE TABLE -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -INSERT INTO t VALUES (1, 2) ON CONFLICT ((a + b)) DO NOTHING; -~~~ - -~~~ -invalid syntax: statement ignored: at or near "(": syntax error -SQLSTATE: 42601 -DETAIL: source SQL: -INSERT INTO t VALUES (1, 2) ON CONFLICT ((a + b)) DO NOTHING - ^ -HINT: try \h INSERT -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -INSERT INTO t VALUES (1, 2) ON CONFLICT ((a + b)) DO UPDATE SET a = 10; -~~~ - -~~~ -invalid syntax: statement ignored: at or near "(": syntax error -SQLSTATE: 42601 -DETAIL: source SQL: -INSERT INTO t VALUES (1, 2) ON CONFLICT ((a + b)) DO UPDATE SET a = 10 - ^ -HINT: try \h INSERT -~~~ - -[Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/67893) diff --git a/src/current/_includes/v22.1/sql/function-special-forms.md b/src/current/_includes/v22.1/sql/function-special-forms.md deleted file mode 100644 index b9ac987444a..00000000000 --- a/src/current/_includes/v22.1/sql/function-special-forms.md +++ /dev/null @@ -1,29 +0,0 @@ -| Special form | Equivalent to | -|-----------------------------------------------------------|---------------------------------------------| -| `AT TIME ZONE` | `timezone()` | -| `CURRENT_CATALOG` | `current_catalog()` | -| `COLLATION FOR` | `pg_collation_for()` | -| `CURRENT_DATE` | `current_date()` | -| `CURRENT_ROLE` | `current_user()` | -| `CURRENT_SCHEMA` | `current_schema()` | -| `CURRENT_TIMESTAMP` | `current_timestamp()` | -| `CURRENT_TIME` | `current_time()` | -| `CURRENT_USER` | `current_user()` | -| `EXTRACT( FROM )` | `extract("", )` | -| `EXTRACT_DURATION( FROM )` | `extract_duration("", )` | -| `OVERLAY( PLACING FROM FOR )` | `overlay(, , , )` | -| `OVERLAY( PLACING FROM )` | `overlay(, , )` | -| `POSITION( IN )` | `strpos(, )` | -| `SESSION_USER` | `current_user()` | -| `SUBSTRING( FOR FROM )` | `substring(, , )` | -| `SUBSTRING( FOR )` | `substring(, 1, )` | -| `SUBSTRING( FROM FOR )` | `substring(, , )` | -| `SUBSTRING( FROM )` | `substring(, )` | -| `TRIM( FROM )` | `btrim(, )` | -| `TRIM(, )` | `btrim(, )` | -| `TRIM(FROM )` | `btrim()` | -| `TRIM(LEADING FROM )` | `ltrim(, )` | -| `TRIM(LEADING FROM )` | `ltrim()` | -| `TRIM(TRAILING FROM )` | `rtrim(, )` | -| `TRIM(TRAILING FROM )` | `rtrim()` | -| `USER` | `current_user()` | diff --git a/src/current/_includes/v22.1/sql/generated/diagrams/alter_user_password.html b/src/current/_includes/v22.1/sql/generated/diagrams/alter_user_password.html deleted file mode 100644 index 0e014933d1b..00000000000 --- a/src/current/_includes/v22.1/sql/generated/diagrams/alter_user_password.html +++ /dev/null @@ -1,31 +0,0 @@ -
[SVG railroad diagram: ALTER USER IF EXISTS name WITH PASSWORD password]
diff --git a/src/current/_includes/v22.1/sql/generated/diagrams/create_user.html b/src/current/_includes/v22.1/sql/generated/diagrams/create_user.html deleted file mode 100644 index 1dc78bb289a..00000000000 --- a/src/current/_includes/v22.1/sql/generated/diagrams/create_user.html +++ /dev/null @@ -1,39 +0,0 @@ -
[SVG railroad diagram: CREATE USER IF NOT EXISTS name WITH PASSWORD password]
\ No newline at end of file diff --git a/src/current/_includes/v22.1/sql/generated/diagrams/drop_user.html b/src/current/_includes/v22.1/sql/generated/diagrams/drop_user.html deleted file mode 100644 index 57c3db991b9..00000000000 --- a/src/current/_includes/v22.1/sql/generated/diagrams/drop_user.html +++ /dev/null @@ -1,28 +0,0 @@ -
[SVG railroad diagram: DROP USER IF EXISTS user_name [, ...] ]
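Taken together, the three deleted syntax diagrams above describe the basic user-management statements. A minimal sketch of how they compose, assuming a hypothetical SQL user `maxroach` and placeholder passwords (none of these names appear in the original files):

{% include_cached copy-clipboard.html %}
~~~ sql
-- Sketch only: `maxroach` and both passwords are hypothetical placeholder values.
CREATE USER IF NOT EXISTS maxroach WITH PASSWORD 'an-initial-password';
ALTER USER IF EXISTS maxroach WITH PASSWORD 'a-rotated-password';
DROP USER IF EXISTS maxroach;
~~~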
\ No newline at end of file diff --git a/src/current/_includes/v22.1/sql/global-table-description.md b/src/current/_includes/v22.1/sql/global-table-description.md deleted file mode 100644 index 5a6292d970b..00000000000 --- a/src/current/_includes/v22.1/sql/global-table-description.md +++ /dev/null @@ -1,7 +0,0 @@ -A _global_ table is optimized for low-latency reads from every region in the database. The tradeoff is that writes will incur higher latencies from any given region, since writes have to be replicated across every region to make the global low-latency reads possible. Use global tables when your application has a "read-mostly" table of reference data that is rarely updated, and needs to be available to all regions. - -For an example of a table that can benefit from the _global_ table locality setting in a multi-region deployment, see the `promo_codes` table from the [MovR application](movr.html). - -For instructions showing how to set a table's locality to `GLOBAL`, see [`ALTER TABLE ... SET LOCALITY`](set-locality.html#global). - -For more information about global tables, including troubleshooting information, see [Global Tables](global-tables.html). diff --git a/src/current/_includes/v22.1/sql/import-default-value.md b/src/current/_includes/v22.1/sql/import-default-value.md deleted file mode 100644 index 4a88ba003fb..00000000000 --- a/src/current/_includes/v22.1/sql/import-default-value.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_danger}} -Column values cannot be generated by [`DEFAULT`](default-value.html) when importing; an import must include a value for every column specified in the `IMPORT` statement. To use `DEFAULT` values, your file must contain values for the column upon import, or you can [add the column](add-column.html) or [alter the column](alter-column.html#set-or-change-a-default-value) after the table has been imported. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/sql/import-into-default-value.md b/src/current/_includes/v22.1/sql/import-into-default-value.md deleted file mode 100644 index 8c23d6e3de4..00000000000 --- a/src/current/_includes/v22.1/sql/import-into-default-value.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_danger}} -Column values cannot be generated by [`DEFAULT`](default-value.html) when importing; an import must include a value for every column specified in the `IMPORT INTO` statement. To use `DEFAULT` values, your file must contain values for the column upon import, or you can [add the column](add-column.html) or [alter the column](alter-column.html#set-or-change-a-default-value) after the table has been imported. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/sql/import-into-regional-by-row-table.md b/src/current/_includes/v22.1/sql/import-into-regional-by-row-table.md deleted file mode 100644 index abe9e16abe2..00000000000 --- a/src/current/_includes/v22.1/sql/import-into-regional-by-row-table.md +++ /dev/null @@ -1 +0,0 @@ -`IMPORT` and `IMPORT INTO` cannot directly import data to [`REGIONAL BY ROW`](set-locality.html#regional-by-row) tables that are part of [multi-region databases](multiregion-overview.html). For more information, including a workaround for this limitation, see [Known Limitations](known-limitations.html#import-into-a-regional-by-row-table). 
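The workaround referenced above generally follows the same pattern used later in these includes (`ADD COLUMN region crdb_internal_region ... SET LOCALITY REGIONAL BY ROW`): import while the table still has a default locality, then switch it. A minimal sketch, assuming a multi-region database whose regions are already added, plus a hypothetical `users` table and `userfile` path:

{% include_cached copy-clipboard.html %}
~~~ sql
-- Sketch only: the table, columns, and file path are hypothetical.
-- 1. Import while the table is not yet REGIONAL BY ROW.
IMPORT INTO users (id, city, name)
    CSV DATA ('userfile:///users.csv');

-- 2. Add a region column, then change the table's locality.
ALTER TABLE users ADD COLUMN region crdb_internal_region NOT NULL DEFAULT gateway_region();
ALTER TABLE users SET LOCALITY REGIONAL BY ROW AS "region";
~~~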
diff --git a/src/current/_includes/v22.1/sql/indexes-regional-by-row.md b/src/current/_includes/v22.1/sql/indexes-regional-by-row.md deleted file mode 100644 index 85ceed79d99..00000000000 --- a/src/current/_includes/v22.1/sql/indexes-regional-by-row.md +++ /dev/null @@ -1,3 +0,0 @@ - In [multi-region deployments](multiregion-overview.html), most users should use [`REGIONAL BY ROW` tables](multiregion-overview.html#regional-by-row-tables) instead of explicit index [partitioning](partitioning.html). When you add an index to a `REGIONAL BY ROW` table, it is automatically partitioned on the [`crdb_region` column](set-locality.html#crdb_region). Explicit index partitioning is not required. - - While CockroachDB process an [`ADD REGION`](add-region.html) or [`DROP REGION`](drop-region.html) statement on a particular database, creating or modifying an index will throw an error. Similarly, all [`ADD REGION`](add-region.html) and [`DROP REGION`](drop-region.html) statements will be blocked while an index is being modified on a `REGIONAL BY ROW` table within the same database. diff --git a/src/current/_includes/v22.1/sql/insert-vs-upsert.md b/src/current/_includes/v22.1/sql/insert-vs-upsert.md deleted file mode 100644 index cac251a6012..00000000000 --- a/src/current/_includes/v22.1/sql/insert-vs-upsert.md +++ /dev/null @@ -1,9 +0,0 @@ -When inserting or updating all columns of a table, and the table has no secondary -indexes, Cockroach Labs recommends using an `UPSERT` statement instead of the -equivalent `INSERT ON CONFLICT` statement. Whereas `INSERT ON CONFLICT` always -performs a read to determine the necessary writes, the `UPSERT` statement writes -without reading, making it faster. This may be particularly useful if -you are using a simple SQL table of two columns to [simulate direct KV access](sql-faqs.html#can-i-use-cockroachdb-as-a-key-value-store). -In this case, be sure to use the `UPSERT` statement. - -For tables with secondary indexes, there is no performance difference between `UPSERT` and `INSERT ON CONFLICT`. diff --git a/src/current/_includes/v22.1/sql/inverted-joins.md b/src/current/_includes/v22.1/sql/inverted-joins.md deleted file mode 100644 index d07f5fb48eb..00000000000 --- a/src/current/_includes/v22.1/sql/inverted-joins.md +++ /dev/null @@ -1,97 +0,0 @@ -To run these examples, initialize a demo cluster with the MovR workload. - -{% include {{ page.version.version }}/demo_movr.md %} - -Create a GIN index on the `vehicles` table's `ext` column. - -{% include_cached copy-clipboard.html %} -~~~ sql -CREATE INVERTED INDEX idx_vehicle_details ON vehicles(ext); -~~~ - -Check the statement plan for a `SELECT` statement that uses an inner inverted join. 
- -{% include_cached copy-clipboard.html %} -~~~ sql -EXPLAIN SELECT * FROM vehicles@vehicles_pkey AS v2 INNER INVERTED JOIN vehicles@idx_vehicle_details AS v1 ON v1.ext @> v2.ext; -~~~ - -~~~ - info ---------------------------------------------- - distribution: full - vectorized: true - - • lookup join - │ table: vehicles@vehicles_pkey - │ equality: (city, id) = (city,id) - │ equality cols are key - │ pred: ext @> ext - │ - └── • inverted join - │ table: vehicles@idx_vehicle_details - │ - └── • scan - estimated row count: 3,750 (100% of the table; stats collected 1 hour ago) - table: vehicles@vehicles_pkey - spans: FULL SCAN -(16 rows) -~~~ - -You can omit the `INNER INVERTED JOIN` statement by putting `v1.ext` on the left side of a `@>` join condition in a `WHERE` clause and using an [index hint](table-expressions.html#force-index-selection) for the GIN index. - -{% include_cached copy-clipboard.html %} -~~~ sql -EXPLAIN SELECT * FROM vehicles@idx_vehicle_details AS v1, vehicles AS v2 WHERE v1.ext @> v2.ext; -~~~ - -~~~ - info --------------------------------------------------------------------------------------------- - distribution: full - vectorized: true - - • lookup join - │ table: vehicles@vehicles_pkey - │ equality: (city, id) = (city,id) - │ equality cols are key - │ pred: ext @> ext - │ - └── • inverted join - │ table: vehicles@idx_vehicle_details - │ - └── • scan - estimated row count: 3,750 (100% of the table; stats collected 1 hour ago) - table: vehicles@vehicles_pkey - spans: FULL SCAN -(16 rows) -~~~ - -Use the `LEFT INVERTED JOIN` hint to perform a left inverted join. - -{% include_cached copy-clipboard.html %} -~~~ sql -EXPLAIN SELECT * FROM vehicles AS v2 LEFT INVERTED JOIN vehicles AS v1 ON v1.ext @> v2.ext; -~~~ - -~~~ - info ------------------------------------------------------------------------------------------- - distribution: full - vectorized: true - - • lookup join (left outer) - │ table: vehicles@vehicles_pkey - │ equality: (city, id) = (city,id) - │ equality cols are key - │ pred: ext @> ext - │ - └── • inverted join (left outer) - │ table: vehicles@idx_vehicle_details - │ - └── • scan - estimated row count: 3,750 (100% of the table; stats collected 1 hour ago) - table: vehicles@vehicles_pkey - spans: FULL SCAN -(16 rows) -~~~ diff --git a/src/current/_includes/v22.1/sql/jsonb-comparison.md b/src/current/_includes/v22.1/sql/jsonb-comparison.md deleted file mode 100644 index 41420478c24..00000000000 --- a/src/current/_includes/v22.1/sql/jsonb-comparison.md +++ /dev/null @@ -1,13 +0,0 @@ -You cannot use comparison operators (such as `<` or `>`) on [`JSONB`](jsonb.html) elements. For example, the following query does not work and returns an error: - - {% include_cached copy-clipboard.html %} - ~~~ sql - SELECT '{"a": 1}'::JSONB -> 'a' < '{"b": 2}'::JSONB -> 'b'; - ~~~ - - ~~~ - ERROR: unsupported comparison operator: < - SQLSTATE: 22023 - ~~~ - - [Tracking GitHub issue](https://github.com/cockroachdb/cockroach/issues/49144) diff --git a/src/current/_includes/v22.1/sql/limit-row-size.md b/src/current/_includes/v22.1/sql/limit-row-size.md deleted file mode 100644 index 7a27b3bc979..00000000000 --- a/src/current/_includes/v22.1/sql/limit-row-size.md +++ /dev/null @@ -1,22 +0,0 @@ -## Limit the size of rows - -To help you avoid failures arising from misbehaving applications that bloat the size of rows, you can specify the behavior when a row or individual [column family](column-families.html) larger than a specified size is written to the database. 
Use the [cluster settings](cluster-settings.html) `sql.guardrails.max_row_size_log` to discover large rows and `sql.guardrails.max_row_size_err` to reject large rows. - -When you write a row that exceeds `sql.guardrails.max_row_size_log`: - -- `INSERT`, `UPSERT`, `UPDATE`, `CREATE TABLE AS`, `CREATE INDEX`, `ALTER TABLE`, `ALTER INDEX`, `IMPORT`, or `RESTORE` statements will log a `LargeRow` to the [`SQL_PERF`](logging.html#sql_perf) channel. -- `SELECT`, `DELETE`, `TRUNCATE`, and `DROP` are not affected. - -When you write a row that exceeds `sql.guardrails.max_row_size_err`: - -- `INSERT`, `UPSERT`, and `UPDATE` statements will fail with a code `54000 (program_limit_exceeded)` error. - -- `CREATE TABLE AS`, `CREATE INDEX`, `ALTER TABLE`, `ALTER INDEX`, `IMPORT`, and `RESTORE` statements will log a `LargeRowInternal` event to the [`SQL_INTERNAL_PERF`](logging.html#sql_internal_perf) channel. - -- `SELECT`, `DELETE`, `TRUNCATE`, and `DROP` are not affected. - -You **cannot** update existing rows that violate the limit unless the update shrinks the size of the -row below the limit. You **can** select, delete, alter, back up, and restore such rows. We -recommend using the accompanying setting `sql.guardrails.max_row_size_log` in conjunction with -`SELECT pg_column_size()` queries to detect and fix any existing large rows before lowering -`sql.guardrails.max_row_size_err`. diff --git a/src/current/_includes/v22.1/sql/locality-optimized-search-limited-records.md b/src/current/_includes/v22.1/sql/locality-optimized-search-limited-records.md deleted file mode 100644 index e23f8f22046..00000000000 --- a/src/current/_includes/v22.1/sql/locality-optimized-search-limited-records.md +++ /dev/null @@ -1 +0,0 @@ -- {% if page.name == "cost-based-optimizer.md" %} Locality optimized search {% else %} [Locality optimized search](cost-based-optimizer.html#locality-optimized-search-in-multi-region-clusters) {% endif %} works only for queries selecting a limited number of records (up to 100,000 unique keys). diff --git a/src/current/_includes/v22.1/sql/locality-optimized-search-virtual-computed-columns.md b/src/current/_includes/v22.1/sql/locality-optimized-search-virtual-computed-columns.md deleted file mode 100644 index 361e422b8a1..00000000000 --- a/src/current/_includes/v22.1/sql/locality-optimized-search-virtual-computed-columns.md +++ /dev/null @@ -1 +0,0 @@ -- {% if page.name == "cost-based-optimizer.md" %} Locality optimized search {% else %} [Locality optimized search](cost-based-optimizer.html#locality-optimized-search-in-multi-region-clusters) {% endif %} does not work for queries that use [partitioned unique indexes](partitioning.html#partition-using-a-secondary-index) on [virtual computed columns](computed-columns.html#virtual-computed-columns). A workaround for computed columns is to make the virtual computed column a [stored computed column](computed-columns.html#stored-computed-columns). Locality optimized search does not work for queries that use partitioned unique [expression indexes](expression-indexes.html). 
[Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/68129) diff --git a/src/current/_includes/v22.1/sql/locality-optimized-search.md b/src/current/_includes/v22.1/sql/locality-optimized-search.md deleted file mode 100644 index 5110cb321fd..00000000000 --- a/src/current/_includes/v22.1/sql/locality-optimized-search.md +++ /dev/null @@ -1 +0,0 @@ -Note that the [SQL engine](../{{ page.version.version }}/architecture/sql-layer.html) will avoid sending requests to nodes in other regions when it can instead read a value from a unique column that is stored locally. This capability is known as [_locality optimized search_](../{{ page.version.version }}/cost-based-optimizer.html#locality-optimized-search-in-multi-region-clusters). diff --git a/src/current/_includes/v22.1/sql/macos-terminal-configuration.md b/src/current/_includes/v22.1/sql/macos-terminal-configuration.md deleted file mode 100644 index e91407ab2e0..00000000000 --- a/src/current/_includes/v22.1/sql/macos-terminal-configuration.md +++ /dev/null @@ -1,14 +0,0 @@ -In **Apple Terminal**: - -1. Navigate to "Preferences", then "Profiles", then "Keyboard". -1. Enable the checkbox "Use Option as Meta Key". - -Apple Terminal Alt key configuration - -In **iTerm2**: - -1. Navigate to "Preferences", then "Profiles", then "Keys". -1. Select the radio button "Esc+" for the behavior of the Left Option Key. - -iTerm2 Alt key configuration - diff --git a/src/current/_includes/v22.1/sql/materialized-views-no-stats.md b/src/current/_includes/v22.1/sql/materialized-views-no-stats.md deleted file mode 100644 index a7b90d3d28e..00000000000 --- a/src/current/_includes/v22.1/sql/materialized-views-no-stats.md +++ /dev/null @@ -1,3 +0,0 @@ -- The optimizer may not select the most optimal query plan when querying materialized views because CockroachDB does not [collect statistics](cost-based-optimizer.html#table-statistics) on materialized views. - - [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/78181). diff --git a/src/current/_includes/v22.1/sql/movr-start-nodes.md b/src/current/_includes/v22.1/sql/movr-start-nodes.md deleted file mode 100644 index 0311fd67ba2..00000000000 --- a/src/current/_includes/v22.1/sql/movr-start-nodes.md +++ /dev/null @@ -1,6 +0,0 @@ -Run [`cockroach demo`](cockroach-demo.html) with the [`--nodes`](cockroach-demo.html#flags) and [`--demo-locality`](cockroach-demo.html#flags) flags This command opens an interactive SQL shell to a temporary, multi-node in-memory cluster with the `movr` database preloaded and set as the [current database](sql-name-resolution.html#current-database). - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach demo --nodes=3 --demo-locality=region=us-east1:region=us-central1:region=us-west1 - ~~~ diff --git a/src/current/_includes/v22.1/sql/movr-start.md b/src/current/_includes/v22.1/sql/movr-start.md deleted file mode 100644 index c0979216eca..00000000000 --- a/src/current/_includes/v22.1/sql/movr-start.md +++ /dev/null @@ -1,63 +0,0 @@ -- Run [`cockroach demo`](cockroach-demo.html) to start a temporary, in-memory cluster with the `movr` dataset preloaded: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach demo - ~~~ - -- Load the `movr` dataset into a persistent local cluster and open an interactive SQL shell: - 1. Start a [secure](secure-a-cluster.html) or [insecure](start-a-local-cluster.html) local cluster. - 1. Use [`cockroach workload`](cockroach-workload.html) to load the `movr` dataset: - -
- - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach workload init movr 'postgresql://root@localhost:26257?sslcert=certs%2Fclient.root.crt&sslkey=certs%2Fclient.root.key&sslmode=verify-full&sslrootcert=certs%2Fca.crt' - ~~~ - -
- - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach workload init movr 'postgresql://root@localhost:26257?sslmode=disable' - ~~~ - -
- - 1. Use [`cockroach sql`](cockroach-sql.html) to open an interactive SQL shell and set `movr` as the [current database](sql-name-resolution.html#current-database): - -
- - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --certs-dir=certs --host=localhost:26257 - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > USE movr; - ~~~ - -
- - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --insecure --host=localhost:26257 - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > USE movr; - ~~~ - -
diff --git a/src/current/_includes/v22.1/sql/movr-statements-geo-partitioned-replicas.md b/src/current/_includes/v22.1/sql/movr-statements-geo-partitioned-replicas.md deleted file mode 100644 index ce4824d1c68..00000000000 --- a/src/current/_includes/v22.1/sql/movr-statements-geo-partitioned-replicas.md +++ /dev/null @@ -1,10 +0,0 @@ -### Setup - -The following examples use MovR, a fictional vehicle-sharing application, to demonstrate CockroachDB SQL statements. For more information about the MovR example application and dataset, see [MovR: A Global Vehicle-sharing App](movr.html). - -To follow along, run [`cockroach demo`](cockroach-demo.html) with the `--geo-partitioned-replicas` flag. This command opens an interactive SQL shell to a temporary, 9-node in-memory cluster with the the `movr` database. - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach demo --geo-partitioned-replicas -~~~ diff --git a/src/current/_includes/v22.1/sql/movr-statements-nodes.md b/src/current/_includes/v22.1/sql/movr-statements-nodes.md deleted file mode 100644 index 4b9eddf612b..00000000000 --- a/src/current/_includes/v22.1/sql/movr-statements-nodes.md +++ /dev/null @@ -1,10 +0,0 @@ -### Setup - -The following examples use MovR, a fictional vehicle-sharing application, to demonstrate CockroachDB SQL statements. For more information about the MovR example application and dataset, see [MovR: A Global Vehicle-sharing App](movr.html). - -To follow along, run [`cockroach demo`](cockroach-demo.html) with the [`--nodes`](cockroach-demo.html#flags) and [`--demo-locality`](cockroach-demo.html#flags) flags. This command opens an interactive SQL shell to a temporary, multi-node in-memory cluster with the `movr` database preloaded and set as the [current database](sql-name-resolution.html#current-database). - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach demo --nodes=6 --demo-locality=region=us-east,zone=us-east-a:region=us-east,zone=us-east-b:region=us-central,zone=us-central-a:region=us-central,zone=us-central-b:region=us-west,zone=us-west-a:region=us-west,zone=us-west-b -~~~ diff --git a/src/current/_includes/v22.1/sql/movr-statements-partitioning.md b/src/current/_includes/v22.1/sql/movr-statements-partitioning.md deleted file mode 100644 index f45202c335c..00000000000 --- a/src/current/_includes/v22.1/sql/movr-statements-partitioning.md +++ /dev/null @@ -1,10 +0,0 @@ -The following examples use MovR, a fictional vehicle-sharing application, to demonstrate CockroachDB SQL statements. For more information about the MovR example application and dataset, see [MovR: A Global Vehicle-sharing App](movr.html). - -To follow along with the examples below, open a new terminal and run [`cockroach demo`](cockroach-demo.html) with the [`--nodes`](cockroach-demo.html#flags) and [`--demo-locality`](cockroach-demo.html#flags) flags. This command opens an interactive SQL shell to a temporary, multi-node in-memory cluster with the `movr` database preloaded and set as the [current database](sql-name-resolution.html#current-database). 
- -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach demo \ ---nodes=9 \ ---demo-locality=region=us-east1:region=us-east1:region=us-east1:region=us-central1:region=us-central1:region=us-central1:region=us-west1:region=us-west1:region=us-west1 -~~~ diff --git a/src/current/_includes/v22.1/sql/movr-statements.md b/src/current/_includes/v22.1/sql/movr-statements.md deleted file mode 100644 index f696756213a..00000000000 --- a/src/current/_includes/v22.1/sql/movr-statements.md +++ /dev/null @@ -1,10 +0,0 @@ -### Setup - -The following examples use MovR, a fictional vehicle-sharing application, to demonstrate CockroachDB SQL statements. For more information about the MovR example application and dataset, see [MovR: A Global Vehicle-sharing App](movr.html). - -To follow along, run [`cockroach demo`](cockroach-demo.html) to start a temporary, in-memory cluster with the `movr` dataset preloaded: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach demo -~~~ diff --git a/src/current/_includes/v22.1/sql/multiregion-example-setup.md b/src/current/_includes/v22.1/sql/multiregion-example-setup.md deleted file mode 100644 index 1069d5d2149..00000000000 --- a/src/current/_includes/v22.1/sql/multiregion-example-setup.md +++ /dev/null @@ -1,26 +0,0 @@ -### Setup - -Only a [cluster region](multiregion-overview.html#cluster-regions) specified [at node startup](cockroach-start.html#locality) can be used as a [database region](multiregion-overview.html#database-regions). - -To follow along with the examples below, start a [demo cluster](cockroach-demo.html) with the [`--global` flag](cockroach-demo.html#general) to simulate a multi-region cluster: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach demo --global --nodes 9 -~~~ - -To see the regions available to the databases in the cluster, use a `SHOW REGIONS FROM CLUSTER` statement: - -{% include_cached copy-clipboard.html %} -~~~ sql -SHOW REGIONS FROM CLUSTER; -~~~ - -~~~ - region | zones ----------------+---------- - europe-west1 | {b,c,d} - us-east1 | {b,c,d} - us-west1 | {a,b,c} -(3 rows) -~~~ diff --git a/src/current/_includes/v22.1/sql/multiregion-movr-add-regions.md b/src/current/_includes/v22.1/sql/multiregion-movr-add-regions.md deleted file mode 100644 index 180b00899f1..00000000000 --- a/src/current/_includes/v22.1/sql/multiregion-movr-add-regions.md +++ /dev/null @@ -1,8 +0,0 @@ -Execute the following statements. They will tell CockroachDB about the database's regions. This information is necessary so that CockroachDB can later move data around to optimize access to particular data from particular regions. For more information about how this works at a high level, see [Database Regions](multiregion-overview.html#database-regions). - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER DATABASE movr PRIMARY REGION "us-east1"; -ALTER DATABASE movr ADD REGION "europe-west1"; -ALTER DATABASE movr ADD REGION "us-west1"; -~~~ diff --git a/src/current/_includes/v22.1/sql/multiregion-movr-global.md b/src/current/_includes/v22.1/sql/multiregion-movr-global.md deleted file mode 100644 index f0b958b4a5d..00000000000 --- a/src/current/_includes/v22.1/sql/multiregion-movr-global.md +++ /dev/null @@ -1,17 +0,0 @@ -Because the data in `promo_codes` is not updated frequently (a.k.a., "read-mostly"), and needs to be available from any region, the right table locality is [`GLOBAL`](multiregion-overview.html#global-tables). 
- -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER TABLE promo_codes SET locality GLOBAL; -~~~ - -Next, alter the `user_promo_codes` table to have a foreign key into the global `promo_codes` table. This will enable fast reads of the `promo_codes.code` column from any region in the cluster. - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER TABLE user_promo_codes - ADD CONSTRAINT user_promo_codes_code_fk - FOREIGN KEY (code) - REFERENCES promo_codes (code) - ON UPDATE CASCADE; -~~~ diff --git a/src/current/_includes/v22.1/sql/multiregion-movr-regional-by-row.md b/src/current/_includes/v22.1/sql/multiregion-movr-regional-by-row.md deleted file mode 100644 index 70f13f3c10a..00000000000 --- a/src/current/_includes/v22.1/sql/multiregion-movr-regional-by-row.md +++ /dev/null @@ -1,103 +0,0 @@ -All of the tables except `promo_codes` contain rows which are partitioned by region, and updated very frequently. For these tables, the right table locality for optimizing access to their data is [`REGIONAL BY ROW`](multiregion-overview.html#regional-by-row-tables). - -Apply this table locality to the remaining tables. These statements use a `CASE` statement to put data for a given city in the right region and can take around 1 minute to complete for each table. - -- `rides` - - {% include_cached copy-clipboard.html %} - ~~~ sql - ALTER TABLE rides ADD COLUMN region crdb_internal_region AS ( - CASE WHEN city = 'amsterdam' THEN 'europe-west1' - WHEN city = 'paris' THEN 'europe-west1' - WHEN city = 'rome' THEN 'europe-west1' - WHEN city = 'new york' THEN 'us-east1' - WHEN city = 'boston' THEN 'us-east1' - WHEN city = 'washington dc' THEN 'us-east1' - WHEN city = 'san francisco' THEN 'us-west1' - WHEN city = 'seattle' THEN 'us-west1' - WHEN city = 'los angeles' THEN 'us-west1' - END - ) STORED; - ALTER TABLE rides ALTER COLUMN REGION SET NOT NULL; - ALTER TABLE rides SET LOCALITY REGIONAL BY ROW AS "region"; - ~~~ - -- `user_promo_codes` - - {% include_cached copy-clipboard.html %} - ~~~ sql - ALTER TABLE user_promo_codes ADD COLUMN region crdb_internal_region AS ( - CASE WHEN city = 'amsterdam' THEN 'europe-west1' - WHEN city = 'paris' THEN 'europe-west1' - WHEN city = 'rome' THEN 'europe-west1' - WHEN city = 'new york' THEN 'us-east1' - WHEN city = 'boston' THEN 'us-east1' - WHEN city = 'washington dc' THEN 'us-east1' - WHEN city = 'san francisco' THEN 'us-west1' - WHEN city = 'seattle' THEN 'us-west1' - WHEN city = 'los angeles' THEN 'us-west1' - END - ) STORED; - ALTER TABLE user_promo_codes ALTER COLUMN REGION SET NOT NULL; - ALTER TABLE user_promo_codes SET LOCALITY REGIONAL BY ROW AS "region"; - ~~~ - -- `users` - - {% include_cached copy-clipboard.html %} - ~~~ sql - ALTER TABLE users ADD COLUMN region crdb_internal_region AS ( - CASE WHEN city = 'amsterdam' THEN 'europe-west1' - WHEN city = 'paris' THEN 'europe-west1' - WHEN city = 'rome' THEN 'europe-west1' - WHEN city = 'new york' THEN 'us-east1' - WHEN city = 'boston' THEN 'us-east1' - WHEN city = 'washington dc' THEN 'us-east1' - WHEN city = 'san francisco' THEN 'us-west1' - WHEN city = 'seattle' THEN 'us-west1' - WHEN city = 'los angeles' THEN 'us-west1' - END - ) STORED; - ALTER TABLE users ALTER COLUMN REGION SET NOT NULL; - ALTER TABLE users SET LOCALITY REGIONAL BY ROW AS "region"; - ~~~ - -- `vehicle_location_histories` - - {% include_cached copy-clipboard.html %} - ~~~ sql - ALTER TABLE vehicle_location_histories ADD COLUMN region crdb_internal_region AS ( - CASE WHEN city = 'amsterdam' THEN 'europe-west1' - WHEN city = 
'paris' THEN 'europe-west1' - WHEN city = 'rome' THEN 'europe-west1' - WHEN city = 'new york' THEN 'us-east1' - WHEN city = 'boston' THEN 'us-east1' - WHEN city = 'washington dc' THEN 'us-east1' - WHEN city = 'san francisco' THEN 'us-west1' - WHEN city = 'seattle' THEN 'us-west1' - WHEN city = 'los angeles' THEN 'us-west1' - END - ) STORED; - ALTER TABLE vehicle_location_histories ALTER COLUMN REGION SET NOT NULL; - ALTER TABLE vehicle_location_histories SET LOCALITY REGIONAL BY ROW AS "region"; - ~~~ - -- `vehicles` - - {% include_cached copy-clipboard.html %} - ~~~ sql - ALTER TABLE vehicles ADD COLUMN region crdb_internal_region AS ( - CASE WHEN city = 'amsterdam' THEN 'europe-west1' - WHEN city = 'paris' THEN 'europe-west1' - WHEN city = 'rome' THEN 'europe-west1' - WHEN city = 'new york' THEN 'us-east1' - WHEN city = 'boston' THEN 'us-east1' - WHEN city = 'washington dc' THEN 'us-east1' - WHEN city = 'san francisco' THEN 'us-west1' - WHEN city = 'seattle' THEN 'us-west1' - WHEN city = 'los angeles' THEN 'us-west1' - END - ) STORED; - ALTER TABLE vehicles ALTER COLUMN REGION SET NOT NULL; - ALTER TABLE vehicles SET LOCALITY REGIONAL BY ROW AS "region"; - ~~~ diff --git a/src/current/_includes/v22.1/sql/physical-plan-url.md b/src/current/_includes/v22.1/sql/physical-plan-url.md deleted file mode 100644 index 0e9109a8586..00000000000 --- a/src/current/_includes/v22.1/sql/physical-plan-url.md +++ /dev/null @@ -1 +0,0 @@ -The generated physical statement plan is encoded into a byte string after the [fragment identifier (`#`)](https://en.wikipedia.org/wiki/Fragment_identifier) in the generated URL. The fragment is not sent to the web server; instead, the browser waits for the web server to return a `decode.html` resource, and then JavaScript on the web page decodes the fragment into a physical statement plan diagram. The statement plan is, therefore, not logged by a server external to the CockroachDB cluster and not exposed to the public internet. diff --git a/src/current/_includes/v22.1/sql/preloaded-databases.md b/src/current/_includes/v22.1/sql/preloaded-databases.md deleted file mode 100644 index 3f1478c9b38..00000000000 --- a/src/current/_includes/v22.1/sql/preloaded-databases.md +++ /dev/null @@ -1,13 +0,0 @@ -New clusters and existing clusters [upgraded](upgrade-cockroach-version.html) to {{ page.version.version }} or later will include auto-generated databases, with the following purposes: - -- The empty `defaultdb` database is used if a client does not specify a database in the [connection parameters](connection-parameters.html). -- The `movr` database contains data about users, vehicles, and rides for the vehicle-sharing app [MovR](movr.html). -- The empty `postgres` database is provided for compatibility with PostgreSQL client applications that require it. -- The `startrek` database contains quotes from episodes. -- The `system` database contains CockroachDB metadata and is read-only. - -All databases except for the `system` database can be [deleted](drop-database.html) if they are not needed. - -{{site.data.alerts.callout_danger}} -Do not query the `system` database directly. Instead, use objects within the [system catalogs](system-catalogs.html). 
-{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/sql/privileges.md b/src/current/_includes/v22.1/sql/privileges.md deleted file mode 100644 index a98d194286d..00000000000 --- a/src/current/_includes/v22.1/sql/privileges.md +++ /dev/null @@ -1,13 +0,0 @@ -Privilege | Levels -----------|------------ -`ALL` | Database, Schema, Table, Type -`CREATE` | Database, Schema, Table -`DROP` | Database, Table -`GRANT` | Database, Schema, Table, Type -`CONNECT` | Database -`SELECT` | Table -`INSERT` | Table -`DELETE` | Table -`UPDATE` | Table -`USAGE` | Schema, Type -`ZONECONFIG` | Database, Table diff --git a/src/current/_includes/v22.1/sql/querying-partitions.md b/src/current/_includes/v22.1/sql/querying-partitions.md deleted file mode 100644 index bb2b9d6f09a..00000000000 --- a/src/current/_includes/v22.1/sql/querying-partitions.md +++ /dev/null @@ -1,163 +0,0 @@ -## Query partitions - -Similar to [indexes](indexes.html), partitions can improve query performance by limiting the numbers of rows that a query must scan. In the case of [geo-partitioned data](regional-tables.html), partitioning can limit a query scan to data in a specific region. - -### Filter on an indexed column - -If you filter the query of a partitioned table on a [column in the index directly following the partition prefix](indexes.html), the [cost-based optimizer](cost-based-optimizer.html) creates a query plan that scans each partition in parallel, rather than performing a costly sequential scan of the entire table. - -For example, suppose that the tables in the [`movr`](movr.html) database are geo-partitioned by region, and you want to query the `users` table for information about a specific user. - -Here is the `CREATE TABLE` statement for the `users` table: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW CREATE TABLE users; -~~~ - -~~~ - table_name | create_statement -+------------+-------------------------------------------------------------------------------------+ - users | CREATE TABLE users ( - | id UUID NOT NULL, - | city VARCHAR NOT NULL, - | name VARCHAR NULL, - | address VARCHAR NULL, - | credit_card VARCHAR NULL, - | CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC), - | FAMILY "primary" (id, city, name, address, credit_card) - | ) PARTITION BY LIST (city) ( - | PARTITION us_west VALUES IN (('seattle'), ('san francisco'), ('los angeles')), - | PARTITION us_east VALUES IN (('new york'), ('boston'), ('washington dc')), - | PARTITION europe_west VALUES IN (('amsterdam'), ('paris'), ('rome')) - | ); - | ALTER PARTITION europe_west OF INDEX movr.public.users@primary CONFIGURE ZONE USING - | constraints = '[+region=europe-west1]'; - | ALTER PARTITION us_east OF INDEX movr.public.users@primary CONFIGURE ZONE USING - | constraints = '[+region=us-east1]'; - | ALTER PARTITION us_west OF INDEX movr.public.users@primary CONFIGURE ZONE USING - | constraints = '[+region=us-west1]' -(1 row) -~~~ - -If you know the user's id, you can filter on the `id` column: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM users WHERE id='00000000-0000-4000-8000-000000000000'; -~~~ - -~~~ - id | city | name | address | credit_card -+--------------------------------------+----------+---------------+----------------------+-------------+ - 00000000-0000-4000-8000-000000000000 | new york | Robert Murphy | 99176 Anderson Mills | 8885705228 -(1 row) -~~~ - -An [`EXPLAIN`](explain.html) statement shows more detail about the cost-based optimizer's plan: - -{% include_cached copy-clipboard.html %} -~~~ 
sql -> EXPLAIN SELECT * FROM users WHERE id='00000000-0000-4000-8000-000000000000'; -~~~ - -~~~ - tree | field | description -+------+-------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | distributed | true - | vectorized | false - scan | | - | table | users@primary - | spans | -/"amsterdam" /"amsterdam"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"amsterdam"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"amsterdam\x00"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"boston" /"boston"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"boston"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"boston\x00"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"los angeles" /"los angeles"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"los angeles"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"los angeles\x00"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"new york" /"new york"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"new york"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"new york\x00"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"paris" 
/"paris"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"paris"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"paris\x00"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"rome" /"rome"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"rome"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"rome\x00"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"san francisco" /"san francisco"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"san francisco"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"san francisco\x00"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"seattle" /"seattle"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"seattle"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"seattle\x00"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"washington dc" /"washington dc"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"washington dc"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"washington dc\x00"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"- - | filter | id = '00000000-0000-4000-8000-000000000000' -(6 rows) -~~~ - -Because the `id` column is in the primary index, directly after the partition prefix (`city`), the optimal query is constrained by the partitioned values. This means the query scans each partition in parallel for the unique `id` value. - -If you know the set of all possible partitioned values, adding a check constraint to the table's create statement can also improve performance. For example: - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE users ADD CONSTRAINT check_city CHECK (city IN ('amsterdam','boston','los angeles','new york','paris','rome','san francisco','seattle','washington dc')); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> EXPLAIN SELECT * FROM users WHERE id='00000000-0000-4000-8000-000000000000'; -~~~ - -~~~ - tree | field | description -+------+-------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | distributed | false - | 
vectorized | false - scan | | - | table | users@primary - | spans | /"amsterdam"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"amsterdam"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"boston"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"boston"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"los angeles"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"los angeles"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"new york"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"new york"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"paris"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"paris"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"rome"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"rome"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"san francisco"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"san francisco"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"seattle"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"seattle"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"washington dc"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"washington dc"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# - | parallel | -(6 rows) -~~~ - - -To see the performance improvement over a query that performs a full table scan, compare these queries to a query with a filter on a column that is not in the index. - -### Filter on a non-indexed column - -Suppose that you want to query the `users` table for information about a specific user, but you only know the user's name. - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM users WHERE name='Robert Murphy'; -~~~ - -~~~ - id | city | name | address | credit_card -+--------------------------------------+----------+---------------+----------------------+-------------+ - 00000000-0000-4000-8000-000000000000 | new york | Robert Murphy | 99176 Anderson Mills | 8885705228 -(1 row) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> EXPLAIN SELECT * FROM users WHERE name='Robert Murphy'; -~~~ - -~~~ - tree | field | description -+------+-------------+------------------------+ - | distributed | true - | vectorized | false - scan | | - | table | users@primary - | spans | ALL - | filter | name = 'Robert Murphy' -(6 rows) -~~~ - -The query returns the same result, but because `name` is not an indexed column, the query performs a full table scan that spans across all partition values. - -### Filter on a partitioned column - -If you know which partition contains the data that you are querying, using a filter (e.g., a [`WHERE` clause](select-clause.html#filter-rows)) on the column that is used for the partition can further improve performance by limiting the scan to the specific partition(s) that contain the data that you are querying. - -Now suppose that you know the user's name and location. 
You can query the table with a filter on the user's name and city: - -{% include_cached copy-clipboard.html %} -~~~ sql -> EXPLAIN SELECT * FROM users WHERE name='Robert Murphy' AND city='new york'; -~~~ - -~~~ - tree | field | description -+------+-------------+-----------------------------------+ - | distributed | true - | vectorized | false - scan | | - | table | users@primary - | spans | /"new york"-/"new york"/PrefixEnd - | filter | name = 'Robert Murphy' -(6 rows) -~~~ - -The table returns the same results as before, but at a much lower cost, as the query scan now spans just the `new york` partition value. diff --git a/src/current/_includes/v22.1/sql/regional-by-row-table-description.md b/src/current/_includes/v22.1/sql/regional-by-row-table-description.md deleted file mode 100644 index 9c083d478f6..00000000000 --- a/src/current/_includes/v22.1/sql/regional-by-row-table-description.md +++ /dev/null @@ -1,7 +0,0 @@ -In a _regional by row_ table, individual rows are optimized for access from different regions. This setting automatically divides a table and all of [its indexes](multiregion-overview.html#indexes-on-regional-by-row-tables) into [partitions](partitioning.html), with each partition optimized for access from a different region. Like [regional tables](multiregion-overview.html#regional-tables), _regional by row_ tables are optimized for access from a single region. However, that region is specified at the row level instead of applying to the whole table. - -Use regional by row tables when your application requires low-latency reads and writes at a row level where individual rows are primarily accessed from a single region. For example, a users table in a global application may need to keep some users' data in specific regions for better performance. - -For an example of a table that can benefit from the _regional by row_ setting in a multi-region deployment, see the `users` table from the [MovR application](movr.html). - -For instructions showing how to set a table's locality to `REGIONAL BY ROW`, see [`ALTER TABLE ... SET LOCALITY`](set-locality.html#regional-by-row). diff --git a/src/current/_includes/v22.1/sql/regional-table-description.md b/src/current/_includes/v22.1/sql/regional-table-description.md deleted file mode 100644 index c535391692c..00000000000 --- a/src/current/_includes/v22.1/sql/regional-table-description.md +++ /dev/null @@ -1,5 +0,0 @@ -In a _regional_ table, access to the table will be fast in the table's "home region" and slower in other regions. In other words, CockroachDB optimizes access to data in a regional table from a single region. By default, a regional table's home region is the [database's primary region](multiregion-overview.html#database-regions), but that can be changed to use any region in the database. Regional tables work well when your application requires low-latency reads and writes for an entire table from a single region. - -For instructions showing how to set a table's locality to `REGIONAL BY TABLE`, see [`ALTER TABLE ... SET LOCALITY`](set-locality.html#regional-by-table). - -By default, all tables in a multi-region database are _regional_ tables that use the database's primary region. Unless you know your application needs different performance characteristics than regional tables provide, there is no need to change this setting. 
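As an illustrative sketch (assuming the database has `us-east1` as one of its regions and a hypothetical `postal_codes` table whose data is read mostly from that region), changing a table's home region might look like the following:

{% include_cached copy-clipboard.html %}
~~~ sql
ALTER TABLE postal_codes SET LOCALITY REGIONAL BY TABLE IN "us-east1";
~~~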
diff --git a/src/current/_includes/v22.1/sql/replication-zone-patterns-to-multiregion-sql-mapping.md b/src/current/_includes/v22.1/sql/replication-zone-patterns-to-multiregion-sql-mapping.md deleted file mode 100644 index 4aa36cf2dec..00000000000 --- a/src/current/_includes/v22.1/sql/replication-zone-patterns-to-multiregion-sql-mapping.md +++ /dev/null @@ -1,5 +0,0 @@ -| Replication Zone Pattern | Multi-Region SQL | -|--------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------| -| [Duplicate indexes](../v20.2/topology-duplicate-indexes.html) | [`GLOBAL` tables](global-tables.html) | -| [Geo-partitioned replicas](../v20.2/topology-geo-partitioned-replicas.html) | [`REGIONAL BY ROW` tables](regional-tables.html#regional-by-row-tables) with [`ZONE` survival goals](multiregion-overview.html#surviving-zone-failures) | -| [Geo-partitioned leaseholders](../v20.2/topology-geo-partitioned-leaseholders.html) | [`REGIONAL BY ROW` tables](regional-tables.html#regional-by-row-tables) with [`REGION` survival goals](multiregion-overview.html#surviving-region-failures) | diff --git a/src/current/_includes/v22.1/sql/retry-savepoints.md b/src/current/_includes/v22.1/sql/retry-savepoints.md deleted file mode 100644 index 6b9e78209f0..00000000000 --- a/src/current/_includes/v22.1/sql/retry-savepoints.md +++ /dev/null @@ -1 +0,0 @@ -A savepoint defined with the name `cockroach_restart` is a "retry savepoint" and is used to implement [advanced client-side transaction retries](advanced-client-side-transaction-retries.html). For more information, see [Retry savepoints](advanced-client-side-transaction-retries.html#retry-savepoints). diff --git a/src/current/_includes/v22.1/sql/row-level-ttl.md b/src/current/_includes/v22.1/sql/row-level-ttl.md deleted file mode 100644 index 52db4fd5453..00000000000 --- a/src/current/_includes/v22.1/sql/row-level-ttl.md +++ /dev/null @@ -1 +0,0 @@ -{% include_cached new-in.html version="v22.1" %} CockroachDB has preview support for Time to Live ("TTL") expiration on table rows, also known as _Row-Level TTL_. Row-Level TTL is a mechanism whereby rows from a table are considered "expired" and can be automatically deleted once those rows have been stored longer than a specified expiration time. diff --git a/src/current/_includes/v22.1/sql/savepoint-ddl-rollbacks.md b/src/current/_includes/v22.1/sql/savepoint-ddl-rollbacks.md deleted file mode 100644 index 57da82ae775..00000000000 --- a/src/current/_includes/v22.1/sql/savepoint-ddl-rollbacks.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_danger}} -Rollbacks to savepoints over [DDL](https://en.wikipedia.org/wiki/Data_definition_language) statements are only supported if you're rolling back to a savepoint created at the beginning of the transaction. 
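For example, the following sequence is expected to work because the savepoint is created before any other statement in the transaction (a sketch using a hypothetical table name):

~~~ sql
BEGIN;
SAVEPOINT before_ddl;                 -- savepoint created at the start of the transaction
CREATE TABLE promo_staging (code STRING PRIMARY KEY);
ROLLBACK TO SAVEPOINT before_ddl;     -- rolls back the DDL statement
COMMIT;
~~~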
-{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/sql/savepoints-and-high-priority-transactions.md b/src/current/_includes/v22.1/sql/savepoints-and-high-priority-transactions.md deleted file mode 100644 index 4b77f2dd561..00000000000 --- a/src/current/_includes/v22.1/sql/savepoints-and-high-priority-transactions.md +++ /dev/null @@ -1 +0,0 @@ -[`ROLLBACK TO SAVEPOINT`](rollback-transaction.html#rollback-a-nested-transaction) (for either regular savepoints or "restart savepoints" defined with `cockroach_restart`) causes a "feature not supported" error after a DDL statement in a [`HIGH PRIORITY` transaction](transactions.html#transaction-priorities), in order to avoid a transaction deadlock. For more information, see GitHub issue [#46414](https://www.github.com/cockroachdb/cockroach/issues/46414). diff --git a/src/current/_includes/v22.1/sql/savepoints-and-row-locks.md b/src/current/_includes/v22.1/sql/savepoints-and-row-locks.md deleted file mode 100644 index 0468c12fc4e..00000000000 --- a/src/current/_includes/v22.1/sql/savepoints-and-row-locks.md +++ /dev/null @@ -1,12 +0,0 @@ -CockroachDB supports exclusive row locks. - -- In PostgreSQL, row locks are released/cancelled upon [`ROLLBACK TO SAVEPOINT`][rts]. -- In CockroachDB, row locks are preserved upon [`ROLLBACK TO SAVEPOINT`][rts]. - -This is an architectural difference that may or may not be lifted in a later CockroachDB version. - -The code of client applications that rely on row locks must be reviewed and possibly modified to account for this difference. In particular, if an application is relying on [`ROLLBACK TO SAVEPOINT`][rts] to release row locks and allow a concurrent transaction touching the same rows to proceed, this behavior will not work with CockroachDB. - - - -[rts]: rollback-transaction.html diff --git a/src/current/_includes/v22.1/sql/schema-changes.md b/src/current/_includes/v22.1/sql/schema-changes.md deleted file mode 100644 index 04c49c2fbd2..00000000000 --- a/src/current/_includes/v22.1/sql/schema-changes.md +++ /dev/null @@ -1 +0,0 @@ -- Schema changes through [`ALTER TABLE`](alter-table.html), [`DROP DATABASE`](drop-database.html), [`DROP TABLE`](drop-table.html), and [`TRUNCATE`](truncate.html) \ No newline at end of file diff --git a/src/current/_includes/v22.1/sql/schema-terms.md b/src/current/_includes/v22.1/sql/schema-terms.md deleted file mode 100644 index d66ebd4058d..00000000000 --- a/src/current/_includes/v22.1/sql/schema-terms.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -To avoid confusion with the general term "[schema](https://en.wiktionary.org/wiki/schema)", in this guide we refer to the logical object as a *user-defined schema*, and to the relationship structure of logical objects in a cluster as a *database schema*. -{{site.data.alerts.end}} \ No newline at end of file diff --git a/src/current/_includes/v22.1/sql/select-for-update-example-partial.md b/src/current/_includes/v22.1/sql/select-for-update-example-partial.md deleted file mode 100644 index 01d4c196953..00000000000 --- a/src/current/_includes/v22.1/sql/select-for-update-example-partial.md +++ /dev/null @@ -1,50 +0,0 @@ -This example assumes you are running a [local unsecured cluster](start-a-local-cluster.html). 
- -First, connect to the running cluster (call this Terminal 1): - -{% include_cached copy-clipboard.html %} -~~~ shell -cockroach sql --insecure -~~~ - -Next, create a table and insert some rows: - -{% include_cached copy-clipboard.html %} -~~~ sql -CREATE TABLE kv (k INT PRIMARY KEY, v INT); -INSERT INTO kv (k, v) VALUES (1, 5), (2, 10), (3, 15); -~~~ - -Next, we'll start a [transaction](transactions.html) and lock the row we want to operate on: - -{% include_cached copy-clipboard.html %} -~~~ sql -BEGIN; -SELECT * FROM kv WHERE k = 1 FOR UPDATE; -~~~ - -Press **Enter** twice in the [SQL client](cockroach-sql.html) to send the statements to be evaluated. This will result in the following output: - -~~~ - k | v -+---+----+ - 1 | 5 -(1 row) -~~~ - -Now open another terminal and connect to the database from a second client (call this Terminal 2): - -{% include_cached copy-clipboard.html %} -~~~ shell -cockroach sql --insecure -~~~ - -From Terminal 2, start a transaction and try to lock the same row for updates that is already being accessed by the transaction we opened in Terminal 1: - -{% include_cached copy-clipboard.html %} -~~~ sql -BEGIN; -SELECT * FROM kv WHERE k = 1 FOR UPDATE; -~~~ - -Press **Enter** twice to send the statements to be evaluated. Because Terminal 1 has already locked this row, the `SELECT FOR UPDATE` statement from Terminal 2 will appear to "wait". diff --git a/src/current/_includes/v22.1/sql/select-for-update-limitations.md b/src/current/_includes/v22.1/sql/select-for-update-limitations.md deleted file mode 100644 index 3604aa75d1f..00000000000 --- a/src/current/_includes/v22.1/sql/select-for-update-limitations.md +++ /dev/null @@ -1,10 +0,0 @@ -Locks acquired using {% if page.name == "select-for-update.md" %} `SELECT ... FOR UPDATE` {% else %} [`SELECT ... FOR UPDATE`](select-for-update.html) {% endif %} are dropped on [lease transfers](architecture/replication-layer.html#epoch-based-leases-table-data) and [range splits and merges](architecture/distribution-layer.html#range-merges). `SELECT ... FOR UPDATE` locks should be thought of as best-effort, and should not be relied upon for correctness, as they are implemented as fast, in-memory [unreplicated locks](architecture/transaction-layer.html#unreplicated-locks). - -If a lease transfer or range split/merge occurs on a range held by an unreplicated lock, the lock is dropped, and the following behaviors can occur: - -- The desired ordering of concurrent accesses to one or more rows of a table expressed by your use of `SELECT ... FOR UPDATE` may not be preserved (that is, a transaction _B_ against some table _T_ that was supposed to wait behind another transaction _A_ operating on _T_ may not wait for transaction _A_). -- The transaction that acquired the (now dropped) unreplicated lock may fail to commit, leading to [transaction retry errors with code `40001` and the `restart transaction` error message](common-errors.html#restart-transaction). - -We intend to improve the reliability of these locks. For details, see [cockroachdb/cockroach#75456](https://github.com/cockroachdb/cockroach/issues/75456). - -Note that [serializable isolation](transactions.html#serializable-isolation) is preserved despite this limitation. 
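To finish the two-terminal example above: when the transaction in Terminal 1 updates the row and commits, the lock is released and the blocked statement in Terminal 2 returns. A sketch of the remaining steps in Terminal 1 (the values are only illustrative):

{% include_cached copy-clipboard.html %}
~~~ sql
UPDATE kv SET v = v + 5 WHERE k = 1;
COMMIT;
~~~

Once this commits, the `SELECT ... FOR UPDATE` in Terminal 2 unblocks and reads the updated row, so that transaction operates on the new value of `v` rather than the one it originally tried to read.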
diff --git a/src/current/_includes/v22.1/sql/select-for-update-overview.md b/src/current/_includes/v22.1/sql/select-for-update-overview.md deleted file mode 100644 index c2ef4b44c4e..00000000000 --- a/src/current/_includes/v22.1/sql/select-for-update-overview.md +++ /dev/null @@ -1,20 +0,0 @@ -The `SELECT FOR UPDATE` statement is used to order transactions by controlling concurrent access to one or more rows of a table. - -It works by locking the rows returned by a [selection query][selection], such that other transactions trying to access those rows are forced to wait for the transaction that locked the rows to finish. These other transactions are effectively put into a queue based on when they tried to read the value of the locked rows. - -Because this queueing happens during the read operation, the [thrashing](https://en.wikipedia.org/wiki/Thrashing_(computer_science)) that would otherwise occur if multiple concurrently executing transactions attempt to `SELECT` the same data and then `UPDATE` the results of that selection is prevented. By preventing thrashing, CockroachDB also prevents [transaction retries][retries] that would otherwise occur due to [contention](performance-best-practices-overview.html#transaction-contention). - -As a result, using `SELECT FOR UPDATE` leads to increased throughput and decreased tail latency for contended operations. - -Note that using `SELECT FOR UPDATE` does not completely eliminate the chance of [serialization errors](transaction-retry-error-reference.html), which use the `SQLSTATE` error code `40001`, and emit error messages with the string `restart transaction`. These errors can also arise due to [time uncertainty](architecture/transaction-layer.html#transaction-conflicts). To eliminate the need for application-level retry logic, in addition to `SELECT FOR UPDATE` your application also needs to use a [driver that implements automatic retry handling](transactions.html#client-side-intervention). - -CockroachDB does not support the `FOR SHARE` or `FOR KEY SHARE` [locking strengths](select-for-update.html#locking-strengths), or the `SKIP LOCKED` [wait policy](select-for-update.html#wait-policies). - -{{site.data.alerts.callout_info}} -By default, CockroachDB uses the `SELECT FOR UPDATE` locking mechanism during the initial row scan performed in [`UPDATE`](update.html) and [`UPSERT`](upsert.html) statement execution. To turn off implicit `SELECT FOR UPDATE` locking for `UPDATE` and `UPSERT` statements, set the `enable_implicit_select_for_update` [session variable](set-vars.html) to `false`. -{{site.data.alerts.end}} - - - -[retries]: transactions.html#transaction-retries -[selection]: selection-queries.html diff --git a/src/current/_includes/v22.1/sql/server-side-connection-limit.md b/src/current/_includes/v22.1/sql/server-side-connection-limit.md deleted file mode 100644 index 10b4ab67260..00000000000 --- a/src/current/_includes/v22.1/sql/server-side-connection-limit.md +++ /dev/null @@ -1 +0,0 @@ -{% include_cached new-in.html version="v22.1" %} To control the maximum number of non-superuser ([`root`](security-reference/authorization.html#root-user) user or other [`admin` role](security-reference/authorization.html#admin-role)) connections a [gateway node](architecture/sql-layer.html#gateway-node) can have open at one time, use the `server.max_connections_per_gateway` [cluster setting](cluster-settings.html). 
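For instance, a cluster operator might cap each gateway at 100 open non-superuser connections (an illustrative value, not a recommendation):

{% include_cached copy-clipboard.html %}
~~~ sql
SET CLUSTER SETTING server.max_connections_per_gateway = 100;
~~~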
If a new non-superuser connection would exceed this limit, the error message `"sorry, too many clients already"` is returned, along with error code `53300`. diff --git a/src/current/_includes/v22.1/sql/set-transaction-as-of-system-time-example.md b/src/current/_includes/v22.1/sql/set-transaction-as-of-system-time-example.md deleted file mode 100644 index 8e758f1c303..00000000000 --- a/src/current/_includes/v22.1/sql/set-transaction-as-of-system-time-example.md +++ /dev/null @@ -1,24 +0,0 @@ -{% include_cached copy-clipboard.html %} -~~~ sql -> BEGIN; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SET TRANSACTION AS OF SYSTEM TIME '2019-04-09 18:02:52.0+00:00'; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM orders; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM products; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> COMMIT; -~~~ diff --git a/src/current/_includes/v22.1/sql/shell-commands.md b/src/current/_includes/v22.1/sql/shell-commands.md deleted file mode 100644 index 3916d738cea..00000000000 --- a/src/current/_includes/v22.1/sql/shell-commands.md +++ /dev/null @@ -1,24 +0,0 @@ -The following commands can be used within the interactive SQL shell: - -Command | Usage ---------|------------ -`\?`
`help` | View this help within the shell. -`\q`
`quit`
`exit`
**Ctrl+D** | Exit the shell.
When no text follows the prompt, **Ctrl+C** exits the shell as well; otherwise, **Ctrl+C** clears the line. -`\!` | Run an external command and print its results to `stdout`. [See an example](cockroach-sql.html#run-external-commands-from-the-sql-shell). -`\|` | Run the output of an external command as SQL statements. [See an example](cockroach-sql.html#run-external-commands-from-the-sql-shell).
-`\d <table>` | Show details about columns in the specified table. This command is equivalent to [`SHOW COLUMNS`](show-columns.html). -`\r` | Resets the query input buffer, clearing all SQL statements that have been entered but not yet executed. -`\statement-diag list` | List available [diagnostic bundles](cockroach-statement-diag.html). -`\statement-diag download <bundle-id> [<file>]` | Download a diagnostic bundle. -`\i <filename>` | Reads and executes input from the file `<filename>`, in the current working directory. -`\ir <filename>` | Reads and executes input from the file `<filename>`.
When invoked in the interactive shell, `\i` and `\ir` behave identically (i.e., CockroachDB looks for `<filename>` in the current working directory). When invoked from a script, CockroachDB looks for `<filename>` relative to the directory in which the script is located. -`\echo <arguments>` | Evaluate the `<arguments>` and print the results to the standard output. -`\x <bool>` | When `true`/`on`/`yes`/`1`, [sets the display format](cockroach-sql.html#sql-flag-format) to `records`. When `false`/`off`/`no`/`0`, sets the session's format to the default (`table`/`tsv`). diff --git a/src/current/_includes/v22.1/sql/shell-help.md deleted file mode 100644 index 627ad837132..00000000000 --- a/src/current/_includes/v22.1/sql/shell-help.md +++ /dev/null @@ -1,45 +0,0 @@ -Within the SQL shell, you can get interactive help about statements and functions: - -Command | Usage ---------|------ -`\h`

`??` | List all available SQL statements, by category. -`\hf` | List all available SQL functions, in alphabetical order. -`\h `

` ?` | View help for a specific SQL statement. -`\hf `

` ?` | View help for a specific SQL function. - -#### Examples - -~~~ sql -> \h UPDATE -~~~ - -~~~ -Command: UPDATE -Description: update rows of a table -Category: data manipulation -Syntax: -UPDATE [[AS] ] SET ... [WHERE ] [RETURNING ] - -See also: - SHOW TABLES - INSERT - UPSERT - DELETE - https://www.cockroachlabs.com/docs/v2.1/update.html -~~~ - -~~~ sql -> \hf uuid_v4 -~~~ - -~~~ -Function: uuid_v4 -Category: built-in functions -Returns a UUID. - -Signature Category -uuid_v4() -> bytes [ID Generation] - -See also: - https://www.cockroachlabs.com/docs/v2.1/functions-and-operators.html -~~~ diff --git a/src/current/_includes/v22.1/sql/shell-options.md b/src/current/_includes/v22.1/sql/shell-options.md deleted file mode 100644 index 832b31fce23..00000000000 --- a/src/current/_includes/v22.1/sql/shell-options.md +++ /dev/null @@ -1,15 +0,0 @@ -- To view option descriptions and how they are currently set, use `\set` without any options. -- To enable or disable an option, use `\set
  row | chick | turtle
------+-------+--------
    1 | 🐥    | 🐢
(1 row)
-~~~ - -When piping output to another command or a file, `--format` defaults to `tsv`: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach-sql --insecure \ ---execute="SELECT '🐥' AS chick, '🐢' AS turtle" > out.txt \ ---user=maxroach \ ---host=12.345.67.89 \ ---database=critterdb -~~~ - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cat out.txt -~~~ - -~~~ -1 row -chick turtle -🐥 🐢 -~~~ - -However, you can explicitly set `--format` to another format (e.g., `table`): - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach-sql --insecure \ ---format=table \ ---execute="SELECT '🐥' AS chick, '🐢' AS turtle" > out.txt \ ---user=maxroach \ ---host=12.345.67.89 \ ---database=critterdb -~~~ - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cat out.txt -~~~ - -~~~ - chick | turtle ---------+--------- - 🐥 | 🐢 -(1 row) -~~~ - -### Show borders around the statement output within the SQL shell - -To display outside and inside borders in the statement output, set the `border` [SQL shell option](#client-side-options) to `3`. - -{% include_cached copy-clipboard.html %} -~~~ sql -\set border=3 -SELECT * FROM animals; -~~~ - -~~~ -+--------------------+----------+ -| id | name | -+--------------------+----------+ -| 710907071259213825 | bobcat | -+--------------------+----------+ -| 710907071259279361 | 🐢 | -+--------------------+----------+ -| 710907071259312129 | barn owl | -+--------------------+----------+ -~~~ - -### Make the output of `SHOW` statements selectable - -To make it possible to select from the output of `SHOW` statements, set `--format` to `raw`: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach-sql --insecure \ ---format=raw \ ---user=maxroach \ ---host=12.345.67.89 \ ---database=critterdb -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW CREATE customers; -~~~ - -~~~ -# 2 columns -# row 1 -## 14 -test.customers -## 185 -CREATE TABLE customers ( - id INT NOT NULL, - email STRING NULL, - CONSTRAINT "primary" PRIMARY KEY (id ASC), - UNIQUE INDEX customers_email_key (email ASC), - FAMILY "primary" (id, email) -) -# 1 row -~~~ - -When `--format` is not set to `raw`, you can use the `display_format` [SQL shell option](#client-side-options) to change the output format within the interactive session: - -{% include_cached copy-clipboard.html %} -~~~ sql -> \set display_format raw -~~~ - -~~~ -# 2 columns -# row 1 -## 14 -test.customers -## 185 -CREATE TABLE customers ( - id INT NOT NULL, - email STRING NULL, - CONSTRAINT "primary" PRIMARY KEY (id ASC), - UNIQUE INDEX customers_email_key (email ASC), - FAMILY "primary" (id, email) -) -# 1 row -~~~ - -### Execute SQL statements from a file - -In this example, we show and then execute the contents of a file containing SQL statements. - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cat statements.sql -~~~ - -~~~ -CREATE TABLE roaches (name STRING, country STRING); -INSERT INTO roaches VALUES ('American Cockroach', 'United States'), ('Brownbanded Cockroach', 'United States'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach-sql --insecure \ ---user=maxroach \ ---host=12.345.67.89 \ ---database=critterdb \ --f statements.sql -~~~ - -~~~ -CREATE TABLE -INSERT 2 -~~~ - -### Run external commands from the SQL shell - -In this example, we use `\!` to look at the rows in a CSV file before creating a table and then using `\|` to insert those rows into the table. 
- -{{site.data.alerts.callout_info}}This example works only if the values in the CSV file are numbers. For values in other formats, use an online CSV-to-SQL converter or make your own import program.{{site.data.alerts.end}} - -{% include_cached copy-clipboard.html %} -~~~ sql -> \! cat test.csv -~~~ - -~~~ -12, 13, 14 -10, 20, 30 -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE csv (x INT, y INT, z INT); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> \| IFS=","; while read a b c; do echo "insert into csv values ($a, $b, $c);"; done < test.csv; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM csv; -~~~ - -~~~ - x | y | z ------+----+----- - 12 | 13 | 14 - 10 | 20 | 30 -~~~ - -In this example, we create a table and then use `\|` to programmatically insert values. - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE for_loop (x INT); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> \| for ((i=0;i<10;++i)); do echo "INSERT INTO for_loop VALUES ($i);"; done -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM for_loop; -~~~ - -~~~ - x ------ - 0 - 1 - 2 - 3 - 4 - 5 - 6 - 7 - 8 - 9 -~~~ - -### Allow potentially unsafe SQL statements - -The `--safe-updates` flag defaults to `true`. This prevents SQL statements that may have broad, undesired side effects. For example, by default, we cannot use `DELETE` without a `WHERE` clause to delete all rows from a table: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach-sql --insecure --execute="SELECT * FROM db1.t1" -~~~ - -~~~ - id | name ------+------- - 1 | a - 2 | b - 3 | c - 4 | d - 5 | e - 6 | f - 7 | g - 8 | h - 9 | i - 10 | j ------+------- -(10 rows) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach-sql --insecure --execute="DELETE FROM db1.t1" -~~~ - -~~~ -Error: pq: rejected: DELETE without WHERE clause (sql_safe_updates = true) -Failed running "sql" -~~~ - -However, to allow an "unsafe" statement, you can set `--safe-updates=false`: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach-sql --insecure --safe-updates=false --execute="DELETE FROM db1.t1" -~~~ - -~~~ -DELETE 10 -~~~ - -{{site.data.alerts.callout_info}} -Potentially unsafe SQL statements can also be allowed/disallowed for an entire session via the `sql_safe_updates` [session variable](set-vars.html). 
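For example, to allow such statements for the remainder of the current session rather than per command invocation:

~~~ sql
SET sql_safe_updates = false;
~~~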
-{{site.data.alerts.end}} - -### Reveal the SQL statements sent implicitly by the command-line utility - -In this example, we use the `--execute` flag to execute statements from the command line and the `--echo-sql` flag to reveal SQL statements sent implicitly: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach-sql --insecure \ ---execute="CREATE TABLE t1 (id INT PRIMARY KEY, name STRING)" \ ---execute="INSERT INTO t1 VALUES (1, 'a'), (2, 'b'), (3, 'c')" \ ---user=maxroach \ ---host=12.345.67.89 \ ---database=db1 ---echo-sql -~~~ - -~~~ -# Server version: CockroachDB CCL f8f3c9317 (darwin amd64, built 2017/09/13 15:05:35, go1.8) (same version as client) -# Cluster ID: 847a4ba5-c78a-465a-b1a0-59fae3aab520 -> SET sql_safe_updates = TRUE -> CREATE TABLE t1 (id INT PRIMARY KEY, name STRING) -CREATE TABLE -> INSERT INTO t1 VALUES (1, 'a'), (2, 'b'), (3, 'c') -INSERT 3 -~~~ - -In this example, we start the interactive SQL shell and enable the `echo` shell option to reveal SQL statements sent implicitly: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach-sql --insecure \ ---user=maxroach \ ---host=12.345.67.89 \ ---database=db1 -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> \set echo -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO db1.t1 VALUES (4, 'd'), (5, 'e'), (6, 'f'); -~~~ - -~~~ -> INSERT INTO db1.t1 VALUES (4, 'd'), (5, 'e'), (6, 'f'); -INSERT 3 - -Time: 2.426534ms - -> SHOW TRANSACTION STATUS -> SHOW DATABASE -~~~ - -### Repeat a SQL statement - -Repeating SQL queries on a table can be useful for monitoring purposes. With the `--watch` flag, you can repeat the statements specified with a `--execute` or `-e` flag periodically, until a SQL error occurs or the process is terminated. - -For example, if you want to monitor the number of queries running on the current node, you can use `cockroach-sql` with the `--watch` flag to query the node's `crdb_internal.node_statement_statistics` table for the query count: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach-sql --insecure \ ---execute="SELECT SUM(count) FROM crdb_internal.node_statement_statistics" \ ---watch 1m -~~~ - -~~~ - sum -------- - 926 -(1 row) - sum --------- - 4227 -(1 row) -^C -~~~ - -In this example, the statement is executed every minute. We let the process run for a couple minutes before terminating it with **Ctrl+C**. - -### Connect to a cluster listening for Unix domain socket connections - -To connect to a cluster that is running on the same machine as your client and is listening for [Unix domain socket](https://en.wikipedia.org/wiki/Unix_domain_socket) connections, [specify a Unix domain socket URI](connection-parameters.html#example-uri-for-a-unix-domain-socket) with the `--url` connection parameter. 
- -For example, suppose you start a single-node cluster with the following [`cockroach start-single-node`](cockroach-start-single-node.html) command: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach start-single-node --insecure --socket-dir=/tmp -~~~ - -~~~ -CockroachDB node starting at {{ now | date: "%Y-%m-%d %H:%M:%S.%6 +0000 UTC" }} (took 1.3s) -build: CCL {{page.release_info.version}} @ {{page.release_info.build_time}} -webui: http://Jesses-MBP-2:8080 -sql: postgresql://root@Jesses-MBP-2:26257?sslmode=disable -RPC client flags: ./cockroach --host=Jesses-MBP-2:26257 --insecure -socket: /tmp/.s.PGSQL.26257 -logs: /Users/jesseseldess/Downloads/cockroach-{{ page.release-info.version }}.darwin-10.9-amd64/cockroach-data/logs -temp dir: /Users/jesseseldess/Downloads/cockroach-{{ page.release-info.version }}.darwin-10.9-amd64/cockroach-data/cockroach-temp805054895 -external I/O path: /Users/jesseseldess/Downloads/cockroach-{{ page.release-info.version }}.darwin-10.9-amd64/cockroach-data/extern -store[0]: path=/Users/jesseseldess/Downloads/cockroach-{{ page.release-info.version }}.darwin-10.9-amd64/cockroach-data -storage engine: pebble -status: initialized new cluster -clusterID: 455ad71d-21d4-424a-87ad-8097b6b5b99f -nodeID: 1 -~~~ - -To connect to this cluster with a socket: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach-sql --url='postgres://root@?host=/tmp&port=26257' -~~~ diff --git a/src/current/_includes/v22.1/sql/start-a-multi-region-demo-cluster.md b/src/current/_includes/v22.1/sql/start-a-multi-region-demo-cluster.md deleted file mode 100644 index 597730029e8..00000000000 --- a/src/current/_includes/v22.1/sql/start-a-multi-region-demo-cluster.md +++ /dev/null @@ -1,41 +0,0 @@ -Use the following [`cockroach demo`](cockroach-demo.html) command to start the cluster. This particular combination of flags results in a demo cluster of 9 nodes, with 3 nodes in each region. It sets the appropriate [node localities](cockroach-start.html#locality) and also simulates the network latency that would occur between nodes in these localities. For more information about each flag, see the [`cockroach demo`](cockroach-demo.html#flags) documentation, especially for [`--global`](cockroach-demo.html#global-flag). - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach demo --global --nodes 9 --no-example-database --insecure -~~~ - -When the cluster starts, you'll see a message like the one shown below, followed by a SQL prompt. Note the URLs for: - -- Viewing the [DB Console](ui-overview.html): `http://127.0.0.1:8080`. -- Connecting to the database from a [SQL shell](cockroach-sql.html) or a [programming language](connect-to-the-database.html): `postgres://root@127.0.0.1:26257?sslmode=disable`. - -~~~ -# -# Welcome to the CockroachDB demo database! -# -# You are connected to a temporary, in-memory CockroachDB cluster of 9 nodes. -# -# This demo session will attempt to enable enterprise features -# by acquiring a temporary license from Cockroach Labs in the background. -# To disable this behavior, set the environment variable -# COCKROACH_SKIP_ENABLING_DIAGNOSTIC_REPORTING=true. -# -# Reminder: your changes to data stored in the demo session will not be saved! 
-# -# Connection parameters: -# (webui) http://127.0.0.1:8080/demologin?password=demo76950&username=demo -# (sql) postgres://demo:demo76950@127.0.0.1:26257?sslmode=require -# (sql/unix) postgres://demo:demo76950@?host=%2Fvar%2Ffolders%2Fc8%2Fb_q93vjj0ybfz0fz0z8vy9zc0000gp%2FT%2Fdemo070856957&port=26257 -# -# To display connection parameters for other nodes, use \demo ls. -# -# The user "demo" with password "demo76950" has been created. Use it to access the Web UI! -# -# Server version: CockroachDB CCL v21.1.2 (x86_64-apple-darwin19, built 2021/06/07 18:13:04, go1.15.11) (same version as client) -# Cluster ID: bfd9fc91-69bd-4417-a2f7-66e556bf2cfd -# Organization: Cockroach Demo -# -# Enter \? for a brief introduction. -# -~~~ diff --git a/src/current/_includes/v22.1/sql/super-region-considerations.md b/src/current/_includes/v22.1/sql/super-region-considerations.md deleted file mode 100644 index 4e5cbafed03..00000000000 --- a/src/current/_includes/v22.1/sql/super-region-considerations.md +++ /dev/null @@ -1,7 +0,0 @@ -To use super regions, you must keep the following considerations in mind: - -- Your cluster must be a [multi-region cluster](multiregion-overview.html). -- Super regions [must be enabled](#enable-super-regions). -- Super regions can only contain one or more [database regions](multiregion-overview.html#database-regions) that have already been added with [`ADD REGION`](add-region.html). -- Each database region can only belong to one super region. In other words, given two super regions _A_ and _B_, the set of database regions in _A_ must be [disjoint](https://en.wikipedia.org/wiki/Disjoint_sets) from the set of database regions in _B_. -- You cannot [drop a region](drop-region.html) that is part of a super region until you either [alter the super region](alter-super-region.html) to remove it, or [drop the super region](drop-super-region.html) altogether. diff --git a/src/current/_includes/v22.1/sql/super-regions-for-domiciling-with-region-survivability.md b/src/current/_includes/v22.1/sql/super-regions-for-domiciling-with-region-survivability.md deleted file mode 100644 index f7aefe8abec..00000000000 --- a/src/current/_includes/v22.1/sql/super-regions-for-domiciling-with-region-survivability.md +++ /dev/null @@ -1 +0,0 @@ -If you want to do data domiciling for databases with [region survival goals](multiregion-overview.html#survive-region-failures) {% if page.name == "multiregion-overview.md" %} using the higher-level multi-region abstractions, you must use super regions. {% else %} using the higher-level [multi-region abstractions](multiregion-overview.html), you must use [super regions](multiregion-overview.html#super-regions). {% endif %} Using [`ALTER DATABASE ... PLACEMENT RESTRICTED`](placement-restricted.html) will not work for databases that are set up with region survival goals. diff --git a/src/current/_includes/v22.1/sql/unsupported-postgres-features.md b/src/current/_includes/v22.1/sql/unsupported-postgres-features.md deleted file mode 100644 index 0200c4b3b79..00000000000 --- a/src/current/_includes/v22.1/sql/unsupported-postgres-features.md +++ /dev/null @@ -1,15 +0,0 @@ -- Stored procedures and functions. -- Triggers. -- Events. -- User-defined functions (UDFs). -- `FULLTEXT` functions and indexes. -- Drop primary key. - - {{site.data.alerts.callout_info}} - Each table must have a primary key associated with it. You can [drop and add a primary key constraint within a single transaction](drop-constraint.html#drop-and-add-a-primary-key-constraint). 
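A minimal sketch of that pattern, assuming a hypothetical `users` table being re-keyed on `(city, id)`:

~~~ sql
BEGIN;
ALTER TABLE users DROP CONSTRAINT "primary";
ALTER TABLE users ADD CONSTRAINT "primary" PRIMARY KEY (city, id);
COMMIT;
~~~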
- {{site.data.alerts.end}} -- XML functions. -- Column-level privileges. -- XA syntax. -- Creating a database from a template. -- [Dropping a single partition from a table](partitioning.html#known-limitations). diff --git a/src/current/_includes/v22.1/sql/use-import-into.md b/src/current/_includes/v22.1/sql/use-import-into.md deleted file mode 100644 index a2f3cb47d68..00000000000 --- a/src/current/_includes/v22.1/sql/use-import-into.md +++ /dev/null @@ -1,5 +0,0 @@ -{{site.data.alerts.callout_info}} -As of v21.2 `IMPORT TABLE` will be deprecated. We recommend using [`CREATE TABLE`](create-table.html) followed by [`IMPORT INTO`](import-into.html) to import data into a new table. For an example, read [Import into a new table from a CSV file](import-into.html#import-into-a-new-table-from-a-csv-file). - -To import data into an existing table, use [`IMPORT INTO`](import-into.html). -{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/sql/use-multiregion-instead-of-partitioning.md b/src/current/_includes/v22.1/sql/use-multiregion-instead-of-partitioning.md deleted file mode 100644 index 961ea1d2e33..00000000000 --- a/src/current/_includes/v22.1/sql/use-multiregion-instead-of-partitioning.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_success}} - Most users should not need to use partitioning directly. Instead, they should use CockroachDB's built-in [multi-region capabilities](multiregion-overview.html), which automatically handle geo-partitioning and other low-level details. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/start-in-docker/mac-linux-steps.md b/src/current/_includes/v22.1/start-in-docker/mac-linux-steps.md deleted file mode 100644 index 660be136337..00000000000 --- a/src/current/_includes/v22.1/start-in-docker/mac-linux-steps.md +++ /dev/null @@ -1,260 +0,0 @@ -## Step 1. Create a bridge network - -Since you'll be running multiple Docker containers on a single host, with one CockroachDB node per container, you need to create what Docker refers to as a [bridge network](https://docs.docker.com/engine/userguide/networking/#/a-bridge-network). The bridge network will enable the containers to communicate as a single cluster while keeping them isolated from external networks. - -{% include_cached copy-clipboard.html %} -~~~ shell -$ docker network create -d bridge roachnet -~~~ - -We've used `roachnet` as the network name here and in subsequent steps, but feel free to give your network any name you like. - -## Step 2. Start the cluster - -1. Create a [Docker volume](https://docs.docker.com/storage/volumes/) for each container: - - {% include_cached copy-clipboard.html %} - ~~~ shell - docker volume create roach1 - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - docker volume create roach2 - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - docker volume create roach3 - ~~~ - - {{site.data.alerts.callout_danger}} - Avoid using the `-v` / `--volume` command to mount a local macOS filesystem into the container. Use Docker volumes or a [`tmpfs` mount](https://docs.docker.com/storage/tmpfs/). - {{site.data.alerts.end}} - -1. Start the first node: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ docker run -d \ - --name=roach1 \ - --hostname=roach1 \ - --net=roachnet \ - -p 26257:26257 -p 8080:8080 \ - -v "roach1:/cockroach/cockroach-data" \ - {{page.release_info.docker_image}}:{{page.release_info.version}} start \ - --insecure \ - --join=roach1,roach2,roach3 - ~~~ - -1. 
This command creates a container and starts the first CockroachDB node inside it. Take a moment to understand each part: - - `docker run`: The Docker command to start a new container. - - `-d`: This flag runs the container in the background so you can continue the next steps in the same shell. - - `--name`: The name for the container. This is optional, but a custom name makes it significantly easier to reference the container in other commands, for example, when opening a Bash session in the container or stopping the container. - - `--hostname`: The hostname for the container. You will use this to join other containers/nodes to the cluster. - - `--net`: The bridge network for the container to join. See step 1 for more details. - - `-p 26257:26257 -p 8080:8080`: These flags map the default port for inter-node and client-node communication (`26257`) and the default port for HTTP requests to the DB Console (`8080`) from the container to the host. This enables inter-container communication and makes it possible to call up the DB Console from a browser. - - `-v "roach1:/cockroach/cockroach-data"`: This flag mounts a host directory as a data volume. This means that data and logs for this node will be stored in the `roach1` volume on the host and will persist after the container is stopped or deleted. For more details, see Docker's [volumes](https://docs.docker.com/storage/volumes/) topic. - - `{{page.release_info.docker_image}}:{{page.release_info.version}} start --insecure --join`: The CockroachDB command to [start a node](cockroach-start.html) in the container in insecure mode. The `--join` flag specifies the `hostname` of each node that will initially comprise your cluster. Otherwise, all [`cockroach start`](cockroach-start.html) defaults are accepted. Note that since each node is in a unique container, using identical default ports won’t cause conflicts. - -1. Start two more nodes: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ docker run -d \ - --name=roach2 \ - --hostname=roach2 \ - --net=roachnet \ - -v "roach2:/cockroach/cockroach-data" \ - {{page.release_info.docker_image}}:{{page.release_info.version}} start \ - --insecure \ - --join=roach1,roach2,roach3 - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ docker run -d \ - --name=roach3 \ - --hostname=roach3 \ - --net=roachnet \ - -v "roach3:/cockroach/cockroach-data" \ - {{page.release_info.docker_image}}:{{page.release_info.version}} start \ - --insecure \ - --join=roach1,roach2,roach3 - ~~~ - -1. Perform a one-time initialization of the cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ docker exec -it roach1 ./cockroach init --insecure - ~~~ - - You'll see the following message: - - ~~~ - Cluster successfully initialized - ~~~ - - At this point, each node also prints helpful [startup details](cockroach-start.html#standard-output) to its log. 
For example, the following command retrieves node 1's startup details: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ docker exec -it roach1 grep 'node starting' cockroach-data/logs/cockroach.log -A 11 - ~~~ - - The output will look something like this: - - ~~~ - CockroachDB node starting at {{ now | date: "%Y-%m-%d %H:%M:%S.%6 +0000 UTC" }} - build: CCL {{page.release_info.version}} @ {{page.release_info.build_time}} (go1.12.6) - webui: http://roach1:8080 - sql: postgresql://root@roach1:26257?sslmode=disable - client flags: /cockroach/cockroach --host=roach1:26257 --insecure - logs: /cockroach/cockroach-data/logs - temp dir: /cockroach/cockroach-data/cockroach-temp273641911 - external I/O path: /cockroach/cockroach-data/extern - store[0]: path=/cockroach/cockroach-data - status: initialized new cluster - clusterID: 1a705c26-e337-4b09-95a6-6e5a819f9eec - nodeID: 1 - ~~~ - -## Step 3. Use the built-in SQL client - -Now that your cluster is live, you can use any node as a SQL gateway. To test this out, let's use the `docker exec` command to start the [built-in SQL shell](cockroach-sql.html) in the first container. - -1. Start the SQL shell in the first container: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ docker exec -it roach1 ./cockroach sql --insecure - ~~~ - -2. Run some basic [CockroachDB SQL statements](learn-cockroachdb-sql.html): - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE DATABASE bank; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE TABLE bank.accounts (id INT PRIMARY KEY, balance DECIMAL); - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > INSERT INTO bank.accounts VALUES (1, 1000.50); - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SELECT * FROM bank.accounts; - ~~~ - - ~~~ - id | balance - +----+---------+ - 1 | 1000.50 - (1 row) - ~~~ - -3. Now exit the SQL shell on node 1 and open a new shell on node 2: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > \q - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ docker exec -it roach2 ./cockroach sql --insecure - ~~~ - -4. Run the same `SELECT` query as before: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SELECT * FROM bank.accounts; - ~~~ - - ~~~ - id | balance - +----+---------+ - 1 | 1000.50 - (1 row) - ~~~ - - As you can see, node 1 and node 2 behaved identically as SQL gateways. - -5. Exit the SQL shell on node 2: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > \q - ~~~ - -## Step 4. Run a sample workload - -CockroachDB also comes with a number of [built-in workloads](cockroach-workload.html) for simulating client traffic. Let's run the workload based on CockroachDB's sample vehicle-sharing application, [MovR](movr.html). - -1. Load the initial dataset: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ docker exec -it roach1 ./cockroach workload init movr \ - 'postgresql://root@roach1:26257?sslmode=disable' - ~~~ - -2. Run the workload for 5 minutes: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ docker exec -it roach1 ./cockroach workload run movr \ - --duration=5m \ - 'postgresql://root@roach1:26257?sslmode=disable' - ~~~ - -## Step 5. Access the DB Console - -The CockroachDB [DB Console](ui-overview.html) gives you insight into the overall health of your cluster as well as the performance of the client workload. - -1. 
When you started the first container/node, you mapped the node's default HTTP port `8080` to port `8080` on the host, so go to http://localhost:8080. - -2. On the [**Cluster Overview**](ui-cluster-overview-page.html), notice that three nodes are live, with an identical replica count on each node: - - DB Console - - This demonstrates CockroachDB's [automated replication](demo-replication-and-rebalancing.html) of data via the Raft consensus protocol. - - {{site.data.alerts.callout_info}} - Capacity metrics can be incorrect when running multiple nodes on a single machine. For more details, see this [limitation](known-limitations.html#available-capacity-metric-in-the-db-console). - {{site.data.alerts.end}} - -3. Click [**Metrics**](ui-overview-dashboard.html) to access a variety of time series dashboards, including graphs of SQL queries and service latency over time: - - DB Console - -4. Use the [**Databases**](ui-databases-page.html), [**Statements**](ui-statements-page.html), and [**Jobs**](ui-jobs-page.html) pages to view details about your databases and tables, to assess the performance of specific queries, and to monitor the status of long-running operations like schema changes, respectively. - -## Step 6. Stop the cluster - -Use the `docker stop` and `docker rm` commands to stop and remove the containers (and therefore the cluster): - -{% include_cached copy-clipboard.html %} -~~~ shell -$ docker stop roach1 roach2 roach3 -~~~ - -{% include_cached copy-clipboard.html %} -~~~ shell -$ docker rm roach1 roach2 roach3 -~~~ - -If you do not plan to restart the cluster, you may want to remove the Docker volumes: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ docker volume rm roach1 roach2 roach3 -~~~ diff --git a/src/current/_includes/v22.1/topology-patterns/fundamentals.md b/src/current/_includes/v22.1/topology-patterns/fundamentals.md deleted file mode 100644 index 5e8103ed05d..00000000000 --- a/src/current/_includes/v22.1/topology-patterns/fundamentals.md +++ /dev/null @@ -1,6 +0,0 @@ -- Multi-region topology patterns are almost always table-specific. If you haven't already, [review the full range of patterns](topology-patterns.html#multi-region) to ensure you choose the right one for each of your tables. -- Review how data is replicated and distributed across a cluster, and how this affects performance. It is especially important to understand the concept of the "leaseholder". For a summary, see [Reads and Writes in CockroachDB](architecture/reads-and-writes-overview.html). For a deeper dive, see the CockroachDB [Architecture Overview](architecture/overview.html). -- Review the concept of [locality](cockroach-start.html#locality), which CockroachDB uses to place and balance data based on how you define [replication controls](configure-replication-zones.html). -- Review the recommendations and requirements in our [Production Checklist](recommended-production-settings.html). -- This topology doesn't account for hardware specifications, so be sure to follow our [hardware recommendations](recommended-production-settings.html#hardware) and perform a POC to size hardware for your use case. For optimal cluster performance, Cockroach Labs recommends that all nodes use the same hardware and operating system. -- Adopt relevant [SQL Best Practices](performance-best-practices-overview.html) to ensure optimal performance. 
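As a quick check while reviewing locality, you can confirm the locality each node was started with before applying any table-specific pattern. The following is a minimal sketch; it assumes the `node_id` and `locality` columns of the `crdb_internal.gossip_nodes` virtual table, whose exact schema can vary by version:

{% include_cached copy-clipboard.html %}
~~~ sql
-- Show the locality of the node you are connected to.
SHOW LOCALITY;

-- List the locality reported by every node in the cluster.
SELECT node_id, locality FROM crdb_internal.gossip_nodes;
~~~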
diff --git a/src/current/_includes/v22.1/topology-patterns/multi-region-cluster-setup.md b/src/current/_includes/v22.1/topology-patterns/multi-region-cluster-setup.md deleted file mode 100644 index f6045f533b6..00000000000 --- a/src/current/_includes/v22.1/topology-patterns/multi-region-cluster-setup.md +++ /dev/null @@ -1,27 +0,0 @@ -Each [multi-region pattern](topology-patterns.html#multi-region) assumes the following setup: - -Multi-region hardware setup - -#### Hardware - -- 3 regions -- Per region, 3+ AZs with 3+ VMs evenly distributed across them -- Region-specific app instances and load balancers - - Each load balancer redirects to CockroachDB nodes in its region. - - When CockroachDB nodes are unavailable in a region, the load balancer redirects to nodes in other regions. - -#### Cluster startup - -Start each node with the [`--locality`](cockroach-start.html#locality) flag specifying its region and AZ combination. For example, the following command starts a node in the `west1` AZ of the `us-west` region: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach start \ ---locality=region=us-west,zone=west1 \ ---certs-dir=certs \ ---advertise-addr= \ ---join=:26257,:26257,:26257 \ ---cache=.25 \ ---max-sql-memory=.25 \ ---background -~~~ diff --git a/src/current/_includes/v22.1/topology-patterns/multiregion-db-setup.md b/src/current/_includes/v22.1/topology-patterns/multiregion-db-setup.md deleted file mode 100644 index 48e3da4d9eb..00000000000 --- a/src/current/_includes/v22.1/topology-patterns/multiregion-db-setup.md +++ /dev/null @@ -1,34 +0,0 @@ -1. Create a database and set it as the default database: - - {% include_cached copy-clipboard.html %} - ~~~ sql - CREATE DATABASE test; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - USE test; - ~~~ - - [This cluster is already deployed across three regions](#cluster-setup). Therefore, to make this database a "multi-region database", issue the following SQL statement to [set the primary region](add-region.html#set-the-primary-region): - - {% include_cached copy-clipboard.html %} - ~~~ sql - ALTER DATABASE test PRIMARY REGION "us-east"; - ~~~ - - {{site.data.alerts.callout_info}} - Every multi-region database must have a primary region. For more information, see [Database regions](multiregion-overview.html#database-regions). - {{site.data.alerts.end}} - -1. Issue the following [`ADD REGION`](add-region.html) statements to add the remaining regions to the database: - - {% include_cached copy-clipboard.html %} - ~~~ sql - ALTER DATABASE test ADD REGION "us-west"; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - ALTER DATABASE test ADD REGION "us-central"; - ~~~ diff --git a/src/current/_includes/v22.1/topology-patterns/multiregion-fundamentals.md b/src/current/_includes/v22.1/topology-patterns/multiregion-fundamentals.md deleted file mode 100644 index 1e4d07e0fa9..00000000000 --- a/src/current/_includes/v22.1/topology-patterns/multiregion-fundamentals.md +++ /dev/null @@ -1,19 +0,0 @@ -Multi-region patterns require thinking about the following questions: - -- What are your [survival goals](multiregion-overview.html#survival-goals)? Do you need to survive a [zone failure](multiregion-overview.html#surviving-zone-failures)? Do you need to survive a [region failure](multiregion-overview.html#surviving-region-failures)? -- What are the [table localities](multiregion-overview.html#table-locality) that will provide the performance characteristics you need for each table's data? 
- - Do you need low-latency reads and writes from a single region? Do you need that single region to be configurable at the [row level](multiregion-overview.html#regional-by-row-tables)? Or will [a single optimized region for the entire table](multiregion-overview.html#regional-tables) suffice? - - Do you have a "read-mostly" [table of reference data that is rarely updated](multiregion-overview.html#global-tables), but that must be read with low latency from all regions? - -For more information about CockroachDB multi-region capabilities, review the following pages: - -- [Multi-Region Capabilities Overview](multiregion-overview.html) -- [How to Choose a Multi-Region Configuration](choosing-a-multi-region-configuration.html) -- [When to use `ZONE` vs. `REGION` Survival Goals](when-to-use-zone-vs-region-survival-goals.html) -- [When to use `REGIONAL` vs. `GLOBAL` Tables](when-to-use-regional-vs-global-tables.html) - -In addition, reviewing the following information will be helpful: - -- The concept of [locality](cockroach-start.html#locality), which CockroachDB uses to place and balance data based on how you define survival goal and table locality settings. -- The recommendations in our [Production Checklist](recommended-production-settings.html), including our [hardware recommendations](recommended-production-settings.html#hardware). Afterwards, perform a proof of concept to size hardware for your use case. -- [SQL Performance Best Practices](performance-best-practices-overview.html) diff --git a/src/current/_includes/v22.1/topology-patterns/see-also.md b/src/current/_includes/v22.1/topology-patterns/see-also.md deleted file mode 100644 index e0b207b6a3e..00000000000 --- a/src/current/_includes/v22.1/topology-patterns/see-also.md +++ /dev/null @@ -1,15 +0,0 @@ -- [Multi-Region Capabilities Overview](multiregion-overview.html) -- [How to Choose a Multi-Region Configuration](choosing-a-multi-region-configuration.html) -- [When to Use `ZONE` vs. `REGION` Survival Goals](when-to-use-zone-vs-region-survival-goals.html) -- [When to Use `REGIONAL` vs. `GLOBAL` Tables](when-to-use-regional-vs-global-tables.html) -- [Low Latency Reads and Writes in a Multi-Region Cluster](demo-low-latency-multi-region-deployment.html) -- [Migrate to Multi-Region SQL](migrate-to-multiregion-sql.html) -- [Topology Patterns Overview](topology-patterns.html) - - Single-region patterns - - [Development](topology-development.html) - - [Basic Production](topology-basic-production.html) - - Multi-region patterns - - [`REGIONAL` Tables](regional-tables.html) - - [`GLOBAL` Tables](global-tables.html) - - [Follower Reads](topology-follower-reads.html) - - [Follow-the-Workload](topology-follow-the-workload.html) diff --git a/src/current/_includes/v22.1/ui-custom-chart-debug-page-00.html b/src/current/_includes/v22.1/ui-custom-chart-debug-page-00.html deleted file mode 100644 index 36e0764df99..00000000000 --- a/src/current/_includes/v22.1/ui-custom-chart-debug-page-00.html +++ /dev/null @@ -1,109 +0,0 @@ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-| Column | Description |
-|--------|-------------|
-| Metric Name | How the system refers to this metric, e.g., `sql.bytesin`. |
-| Downsampler | The "Downsampler" operation is used to combine the individual datapoints over the longer period into a single datapoint. We store one data point every ten seconds, but for queries over long time spans the backend lowers the resolution of the returned data, perhaps only returning one data point for every minute, five minutes, or even an entire hour in the case of the 30 day view.<br><br>Options:<br>• **AVG**: Returns the average value over the time period.<br>• **MIN**: Returns the lowest value seen.<br>• **MAX**: Returns the highest value seen.<br>• **SUM**: Returns the sum of all values seen. |
-| Aggregator | Used to combine data points from different nodes. It has the same operations available as the Downsampler.<br><br>Options:<br>• **AVG**: Returns the average value over the time period.<br>• **MIN**: Returns the lowest value seen.<br>• **MAX**: Returns the highest value seen.<br>• **SUM**: Returns the sum of all values seen. |
-| Rate | Determines how to display the rate of change during the selected time period.<br><br>Options:<br>• **Normal**: Returns the actual recorded value.<br>• **Rate**: Returns the rate of change of the value per second.<br>• **Non-negative Rate**: Returns the rate of change, but returns 0 instead of negative values. A large number of the stats we track are actually tracked as monotonically increasing counters, so each sample is just the total value of that counter. The rate of change of that counter represents the rate of events being counted, which is usually what you want to graph. "Non-negative Rate" is needed because the counters are stored in memory, and thus if a node resets it goes back to zero (whereas normally they only increase). |
-| Source | The set of nodes being queried, which is either:<br>• The entire cluster.<br>• A single, named node. |
-| Per Node | If checked, the chart will show a line for each node's value of this metric. |
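To find valid metric names for the **Metric Name** field, one option is to query the `crdb_internal.node_metrics` virtual table from a SQL shell. This is a minimal sketch, assuming the `name` and `value` columns exposed by that table; the set of available metrics varies by version and node:

{% include_cached copy-clipboard.html %}
~~~ sql
-- List SQL-related metric names and their current values on the gateway node.
SELECT name, value
FROM crdb_internal.node_metrics
WHERE name LIKE 'sql.%'
ORDER BY name;
~~~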
diff --git a/src/current/_includes/v22.1/ui/admin-access.md b/src/current/_includes/v22.1/ui/admin-access.md deleted file mode 100644 index ff6844e27bc..00000000000 --- a/src/current/_includes/v22.1/ui/admin-access.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -On a [secure cluster](secure-a-cluster.html) you must be an `admin` user to access this area of the DB Console. See [DB Console security](ui-overview.html#db-console-access). -{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/ui/cpu-percent-graph.md b/src/current/_includes/v22.1/ui/cpu-percent-graph.md deleted file mode 100644 index 202bf6684e7..00000000000 --- a/src/current/_includes/v22.1/ui/cpu-percent-graph.md +++ /dev/null @@ -1,15 +0,0 @@ -DB Console CPU Percent graph - -{{site.data.alerts.callout_info}} -This graph shows the CPU consumption by the CockroachDB process, and excludes other processes on the node. Use a separate monitoring tool to measure the total CPU consumption across all processes. -{{site.data.alerts.end}} - -- In the node view, the graph shows the percentage of CPU in use by the CockroachDB process for the selected node. - -- In the cluster view, the graph shows the percentage of CPU in use by the CockroachDB process across all nodes. - -{% include {{ page.version.version }}/prod-deployment/healthy-cpu-percent.md %} - -{{site.data.alerts.callout_info}} -For multi-core systems, the percentage of CPU usage is calculated by normalizing the CPU usage across all cores, whereby 100% utilization indicates that all cores are fully utilized. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/ui/databases.md b/src/current/_includes/v22.1/ui/databases.md deleted file mode 100644 index fbef68a26b0..00000000000 --- a/src/current/_includes/v22.1/ui/databases.md +++ /dev/null @@ -1,73 +0,0 @@ -## Databases - -The **Databases** page shows: - -- Whether [automatic statistics collection]({{ link_prefix }}cost-based-optimizer.html#table-statistics) is enabled for the cluster. -- A list of the databases on the cluster. -{% if page.cloud == true %} -- The **Add database** button, which allows you to [create a new database](serverless-cluster-management.html#create-a-database). -{% endif %} - -The following information is displayed for each database: - -| Column | Description | -|---------------|-------------------------------------------------------------------------------------------------------------------------| -| Databases | The name of the database. | -{% if page.cloud != true -%} -| Size | Approximate disk size across all table replicas in the database. | -{% endif -%} -| Tables | The number of tables in the database. | -{% if page.cloud != true -%} -| Range Count | The number of ranges across all tables in the database. | -| Regions/Nodes | The regions and nodes on which the tables in the database are located. This is not displayed on a single-node cluster. | -{% endif -%} - -Click a **database name** to open the **Tables** page. - -- Select **View: Tables** in the pulldown menu to display the [Tables view](#tables-view). -- Select **View: Grants** in the pulldown menu to display the [Grants view](#grants-view). - -## Tables view - -The **Tables** view shows the tables in your database. - -The following information is displayed for each table: - -| Column | Description | -|--------------------------------|----------------------------------------------------------------------------------------------------------| -| Tables | The name of the table. 
| -{% if page.cloud != true -%} -| Replication Size | The approximate disk size of all replicas of this table on the cluster. | -| Ranges | The number of ranges in the table. | -{% endif -%} -| Columns | The number of columns in the table. | -| Indexes | The number of indexes in the table. | -{% if page.cloud != true -%} -| Regions | The regions and nodes on which the table data is stored. This is not displayed on a single-node cluster. | -{% endif -%} -| Table Stats Last Updated (UTC) | The last time table statistics were created or updated. | - -Click a **table name** to view table details. - -### Table details - -The table details page displays details of a table. It contains an **Overview** tab and a **Grants** tab, which displays the users and [grants]({{ link_prefix }}grant.html) associated with the table. - -#### Overview tab - -The **Overview** tab displays the SQL statements used to [create the table]({{ link_prefix }}create-table.html), table details, and index statistics. - -The table details include: - -{% if page.cloud != true %} -- **Size**: The approximate disk size of all replicas of this table on the cluster. -- **Replicas**: The number of [replicas]({{ link_prefix }}architecture/replication-layer.html) of this table on the cluster. -- **Ranges**: The number of [ranges]({{ link_prefix }}architecture/glossary.html#architecture-range) in this table. -- **Table Stats Last Updated**: The last time table statistics were created or updated. -{% endif %} -- **Auto Stats Collection**: Whether [automatic statistics collection]({{ link_prefix }}cost-based-optimizer.html#table-statistics) is enabled. -{% if page.cloud != true %} -- **Regions/Nodes**: The regions and nodes on which the table data is stored. This is not displayed on a single-node cluster. -{% endif %} -- **Database**: The database in which the table is found. -- **Indexes**: The names of the indexes defined on the table. diff --git a/src/current/_includes/v22.1/ui/index-details.md b/src/current/_includes/v22.1/ui/index-details.md deleted file mode 100644 index aa1a9f56c61..00000000000 --- a/src/current/_includes/v22.1/ui/index-details.md +++ /dev/null @@ -1,43 +0,0 @@ -#### Index details - -The **Index Stats** table displays index statistics for a table. - -Index statistics accumulate from the time an index was created or when statistics were reset. If desired, [admin users]({{ link_prefix }}security-reference/authorization.html#admin-role) may reset index statistics for the cluster by clicking **Reset all index stats**. This link does not appear for non-admin users. - -The following information is displayed for each index: - -| Column | Description | -|------------------|----------------------------------------------------------------------------| -| Indexes | The name of the index. | -| Total Reads | The number of times the index was read since index statistics were reset. | -| Last Used (UTC) | The time the index was created, last read, or index statistics were reset. | - -{% if page.cloud != true %} -Click an **index name** to view index details. The index details page displays the query used to create the index, the number of times the index was read since index statistics were reset, and the time the index was last read. -{% endif %} - -## Grants view - -The **Grants** view shows the [privileges]({{ link_prefix }}security-reference/authorization.html#managing-privileges) granted to users and roles for each database.
- -The following information is displayed for each table: - -| Column | Description | -|------------|-----------------------------------| -{% if page.cloud != true -%} -| Tables | The name of the table. | -{% endif -%} -| Users | The number of users of the table. | -{% if page.cloud != true -%} -| Roles | The list of roles on the table. | -{% endif -%} -| Grants | The list of grants of the table. | - -For more details about grants and privileges, see [`GRANT`]({{ link_prefix }}grant.html). - -## See also - -- [Statements page]({{ link_prefix }}ui-statements-page.html) -- [Assign privileges]({{ link_prefix }}security-reference/authorization.html#managing-privileges) -- [`GRANT`]({{ link_prefix }}grant.html) -- [Raw status endpoints]({{ link_prefix }}monitoring-and-alerting.html#raw-status-endpoints) diff --git a/src/current/_includes/v22.1/ui/logical-bytes.md b/src/current/_includes/v22.1/ui/logical-bytes.md deleted file mode 100644 index e85f04cea92..00000000000 --- a/src/current/_includes/v22.1/ui/logical-bytes.md +++ /dev/null @@ -1 +0,0 @@ -Logical bytes reflect the approximate number of bytes stored in the database. This value may deviate from the number of physical bytes on disk, due to factors such as compression and [write amplification](https://en.wikipedia.org/wiki/Write_amplification). \ No newline at end of file diff --git a/src/current/_includes/v22.1/ui/runnable-goroutines-graph.md b/src/current/_includes/v22.1/ui/runnable-goroutines-graph.md deleted file mode 100644 index 0f6b6d45a79..00000000000 --- a/src/current/_includes/v22.1/ui/runnable-goroutines-graph.md +++ /dev/null @@ -1,5 +0,0 @@ -This graph shows the number of [Goroutines](https://golangbot.com/goroutines/) waiting to run per CPU. This graph should rise and fall based on CPU load. Values greater than 50 are considered high. - -- In the node view, the graph shows the number of Goroutines waiting per CPU on the selected node. - -- In the cluster view, the graph shows the number of Goroutines waiting per CPU across all nodes in the cluster. \ No newline at end of file diff --git a/src/current/_includes/v22.1/ui/sessions.md b/src/current/_includes/v22.1/ui/sessions.md deleted file mode 100644 index 5adc16f47cb..00000000000 --- a/src/current/_includes/v22.1/ui/sessions.md +++ /dev/null @@ -1,78 +0,0 @@ -{% if page.cloud != true %} -Sessions Page -{% endif %} - -{% include_cached new-in.html version="v22.1" %} To filter the sessions, click the **Filters** field. - -Session filter - -To filter by [application]({{ link_prefix }}connection-parameters.html#additional-connection-parameters), select **App** and choose one or more applications. - -- Queries from the SQL shell are displayed under the `$ cockroach` app. -- If you haven't set `application_name` in a client connection string, it appears as `unset`. - -To filter by session duration, specify the session time and unit. - -{% include_cached new-in.html version="v22.1" %} Click Column selector to select the columns to display in the table. - -The following are displayed for each session: - -Column | Description ---------- | ----------- -Session Start Time (UTC) | **New in v22.1:** The timestamp at which the session started. -Session Duration | The amount of time the session has been open. -Status | The status of the session: Active or Idle. A session is Active if it has an open explicit or implicit transaction (individual SQL statement) with a statement that is actively running or waiting to acquire a lock. A session is Idle if it is not executing a statement. 
-Most Recent Statement | **New in v22.1:** If more than one statement is executing, the most recent statement. If the session is Idle, the last statement. -Statement Start Time (UTC) | **New in v22.1:** The timestamp at which the statement started. -Memory Usage | Amount of memory currently allocated to the session, followed by the maximum amount of memory the session has ever allocated. -Client IP Address | **New in v22.1:** The IP address and port of the client that opened the session. -User Name | **New in v22.1:** The user that opened the session. -Application Name | **New in v22.1:** The application connected to the session. -Actions | Options to cancel the active statement and cancel the session. These require the `CANCELQUERY` [role option]({{ link_prefix }}alter-role.html#role-options).
  • **Cancel Statement:** Ends the SQL statement. The session running this statement will receive an error.
  • **Cancel Session:** Ends the session. The client that holds this session will receive a "connection terminated" event.
- -To view details of a session, click a **Session Start Time (UTC)** to display session details. - -## Session details - -If a session is idle, the **Transaction** and **Most Recent Statement** panels will display **No Active [Transaction | Statement]**. - -{% if page.cloud != true %} -Sessions Details Page -{% endif %} - -The **Cancel statement** button ends the SQL statement. The session running this statement will receive an error. -The **Cancel session** button ends the session. The client that holds this session will receive a "connection terminated" event. - -- **Session Details** - - **Session Start Time** shows the timestamp at which the session started. - - **Gateway Node** shows the node ID and IP address/port of the [gateway]({{ link_prefix }}architecture/life-of-a-distributed-transaction.html#gateway) node handling the client connection. - - **Application Name** {% include_cached new-in.html version="v22.1" %} shows the name of the application connected to the session. - - **Client IP Address** shows the IP address/port of the client that opened the session. - - **Memory Usage** shows the amount of memory currently allocated to this session, followed by the maximum amount of memory this session has ever allocated. - - **User Name** {% include_cached new-in.html version="v22.1" %} displays the name of the user that started the session. - -- **Transaction** displays the following information for an open transaction. - - **Transaction Start Time** shows the timestamp at which the transaction started. - - **Number of Statements Executed** shows the total number of SQL statements executed by the transaction. - - **Number of Retries** shows the total number of [retries]({{ link_prefix }}transactions.html#transaction-retries) for the transaction. - - **Number of Automatic Retries** shows the total number of [automatic retries]({{ link_prefix }}transactions.html#automatic-retries) run by CockroachDB for the transaction. - - **Read Only?** shows whether the transaction is read-only. - - **AS OF SYSTEM TIME?** shows whether the transaction uses [`AS OF SYSTEM TIME`]({{ link_prefix }}performance-best-practices-overview.html#use-as-of-system-time-to-decrease-conflicts-with-long-running-queries) to return historical data. - - **Priority** shows the [priority]({{ link_prefix }}transactions.html#transaction-priorities) for the transaction. - - **Memory Usage** shows the amount of memory currently allocated to this transaction, followed by the maximum amount of memory this transaction has ever allocated. - -- **Most Recent Statement** displays the following information for an active statement. - - The SQL statement. - - **Execution Start Time** is the timestamp at which the statement was run. - - **Distributed Execution?** shows whether the statement uses [Distributed SQL (DistSQL)]({{ link_prefix }}architecture/sql-layer.html#distsql) optimization. 
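The **Cancel statement** and **Cancel session** actions have SQL equivalents that you can issue directly from a client. The following is a minimal sketch; the IDs are placeholders that you would copy from the `session_id` and `query_id` columns returned by the `SHOW` statements:

{% include_cached copy-clipboard.html %}
~~~ sql
-- List active sessions and statements across the cluster, including their IDs.
SHOW CLUSTER SESSIONS;
SHOW CLUSTER STATEMENTS;

-- Cancel a single statement, or an entire session, by ID (placeholder values).
CANCEL QUERY '16a1b2c3d4e5f60000000000000001';
CANCEL SESSION '16a1b2c3d4e5f60000000000000001';
~~~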
- -## See also - -- [`SHOW SESSIONS`]({{ link_prefix }}show-sessions.html) -- [Statements page]({{ page_prefix }}statements-page.html) -- [SQL Statements]({{ link_prefix }}sql-statements.html) -- [Transactions]({{ link_prefix }}transactions.html) -- [Transaction Error Retry Reference]({{ link_prefix }}transaction-retry-error-reference.html) -{% if page.cloud != true %} -- [Production Checklist](recommended-production-settings.html#hardware) -{% endif %} diff --git a/src/current/_includes/v22.1/ui/statement-details.md b/src/current/_includes/v22.1/ui/statement-details.md deleted file mode 100644 index c10427394a8..00000000000 --- a/src/current/_includes/v22.1/ui/statement-details.md +++ /dev/null @@ -1,114 +0,0 @@ - - -## Statement Fingerprint page - -The details displayed on the **Statement Fingerprint** page reflect the [time interval](#time-interval) selected on the **Statements** page. - -### Overview - -The **Overview** section displays the SQL statement fingerprint and execution attributes: - -- **Nodes**: the nodes on which the statements executed. Click a node ID to view node statistics. **Nodes** are not displayed for CockroachDB {{ site.data.products.serverless }} clusters. -- **Regions**: the regions on which the statements executed. **Regions** are not displayed for CockroachDB {{ site.data.products.serverless }} clusters. -- **Database**: the database on which the statements executed. -- **App**: the name specified by the [`application_name`]({{ link_prefix }}show-vars.html#supported-variables) session setting. Click the name to view all statements run by that application. -- **Failed?**: whether the statement failed to execute. -- **Full scan?**: whether the execution performed a full scan of the table. -- **Vectorized execution?**: whether the execution used the [vectorized execution engine]({{ link_prefix }}vectorized-execution.html). -- **Transaction type**: the type of transaction ([implicit]({{ link_prefix }}transactions.html#individual-statements) or [explicit]({{ link_prefix }}transactions.html#sql-statements)). -- **Last execution time**: when the statement was last executed. - -The following screenshot shows the statement fingerprint of the query described in [Use the right index]({{ link_prefix }}apply-statement-performance-rules.html#rule-2-use-the-right-index): - -Statement fingerprint overview - -#### Charts - -{% include_cached new-in.html version="v22.1.3" %} Charts following the execution attributes display statement fingerprint statistics: - -- **Statement Execution and Planning Time**: the time taken by the [planner]({{ link_prefix }}architecture/sql-layer.html#sql-parser-planner-executor) to create an execution plan and for CockroachDB to execute statements. -- **Rows Processed**: the total number of rows read and written. -- **Execution Retries**: the number of [retries]({{ link_prefix }}transactions.html#transaction-retries). -- **Execution Count**: the total number of executions. It is calculated as the sum of first attempts and retries. -- **Contention**: the amount of time spent waiting for resources. For more information about contention, see [Understanding and avoiding transaction contention]({{ link_prefix }}performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention). 
- -The following charts summarize the executions of the statement fingerprint illustrated in the preceding section: - -Statement fingerprint charts - -### Explain Plans - -{% include_cached new-in.html version="v22.1" %} The **Explain Plans** tab displays statement plans for an [explainable statement]({{ link_prefix }}sql-grammar.html#preparable_stmt) in the selected [time interval](#time-interval). You can use this information to optimize the query. For more information about plans, see [`EXPLAIN`]({{ link_prefix }}explain.html). - -The following screenshot shows two executions of the query discussed in the preceding sections: - -Plan table - -The plan table shows statistics for the execution and whether the execution was distributed or used the [vectorized execution engine]({{ link_prefix }}vectorized-execution.html). In the screenshot, the **Average Execution Time** column shows that the second execution at `20:37`, which uses the index, takes less time than the first execution. - -To display the plan that was executed, click a plan ID. When you click the plan ID `13182663282122740000`, the following plan displays: - -Plan table - -### Diagnostics - -The **Diagnostics** tab allows you to activate and download diagnostics for a SQL statement fingerprint. - -{{site.data.alerts.callout_info}} -The **Diagnostics** tab is not visible: - -- On CockroachDB {{ site.data.products.serverless }} clusters. -- For roles with the `VIEWACTIVITYREDACTED` [role option]({{ link_prefix }}alter-role.html#role-options). -{{site.data.alerts.end}} - -When you activate diagnostics for a fingerprint, CockroachDB waits for the next SQL query that matches this fingerprint to be run on any node. On the next match, information about the SQL statement is written to a diagnostics bundle that you can download. This bundle consists of [statement traces]({{ link_prefix }}show-trace.html) in various formats (including a JSON file that can be [imported to Jaeger]({{ link_prefix }}query-behavior-troubleshooting.html#visualize-statement-traces-in-jaeger)), a physical query plan, execution statistics, and other information about the query. The bundle contents are identical to those produced by [`EXPLAIN ANALYZE (DEBUG)`]({{ link_prefix }}explain-analyze.html#debug-option). You can use the information collected in the bundle to diagnose problematic SQL statements, such as [slow queries]({{ link_prefix }}query-behavior-troubleshooting.html#query-is-always-slow). We recommend that you share the diagnostics bundle with our [support team]({{ link_prefix }}support-resources.html), which can help you interpret the results. - -{{site.data.alerts.callout_success}} -Diagnostics will be collected a maximum of *N* times for a given activated fingerprint, where *N* is the number of nodes in your cluster. -{{site.data.alerts.end}} - -{% include common/sql/statement-bundle-warning.md %} - -#### Activate diagnostics collection and download bundles - -Statements diagnostics - -To activate diagnostics collection: - -1. Click the **Activate diagnostics** button. {% include_cached new-in.html version="v22.1" %} The **Activate statement diagnostics** dialog displays. - - Statements diagnostics - -1. Choose whether to activate collection on the next statement execution (default) or if execution latency exceeds a certain time. If you choose the latter, accept the default latency of 100 milliseconds, or specify a different time. All executions of the statement fingerprint will run slower until diagnostics are collected. -1. Choose whether the request should expire after 15 minutes, after a different time, or disable automatic expiration by deselecting the checkbox. -1. Click **Activate**. - -A row with the activation time and collection status is added to the **Statement diagnostics** table. - -Statement diagnostics - -The collection status values are: - -- **READY**: indicates that the diagnostics have been collected. To download the diagnostics bundle, click Download bundle **Bundle (.zip)**. -- **WAITING**: indicates that a SQL statement matching the fingerprint has not yet been recorded. {% include_cached new-in.html version="v22.1" %} To cancel diagnostics collection, click the **Cancel request** button. -- **ERROR**: indicates that the attempt at diagnostics collection failed. - -#### View and download diagnostic bundles for all statement fingerprints - -Although fingerprints are periodically cleared from the Statements page, all diagnostics bundles are preserved. To view and download diagnostic bundles for all statement fingerprints, do one of the following: - -- On the **Diagnostics** tab for a statement fingerprint, click the **All statement diagnostics** link. -{% if page.cloud != true %} -- Click **Advanced Debug** in the left-hand navigation and click [Statement Diagnostics History](ui-debug-pages.html#reports). -{% endif %} - -Click Download bundle **Bundle (.zip)** to download any diagnostics bundle. - -## See also - -- [Troubleshoot Query Behavior]({{ link_prefix }}query-behavior-troubleshooting.html) -- [Transaction retries]({{ link_prefix }}transactions.html#transaction-retries) -- [Optimize Statement Performance]({{ link_prefix }}make-queries-fast.html) -- [Support Resources]({{ link_prefix }}support-resources.html) -- [Raw Status Endpoints]({{ link_prefix }}monitoring-and-alerting.html#raw-status-endpoints) -- [Transactions Page]({{ page_prefix }}transactions-page.html) diff --git a/src/current/_includes/v22.1/ui/statements-filter.md b/src/current/_includes/v22.1/ui/statements-filter.md deleted file mode 100644 index 3989802de08..00000000000 --- a/src/current/_includes/v22.1/ui/statements-filter.md +++ /dev/null @@ -1,36 +0,0 @@ -{% if page.cloud == true %} - {% capture link_prefix %}../{{site.current_cloud_version}}/{% endcapture %} - {% assign page_prefix = "" %} -{% else %} - {% assign link_prefix = "" %} - {% assign page_prefix = "ui-" %} -{% endif %} - -### Time interval - -To view [statement fingerprints](#sql-statement-fingerprints) within a specific time interval, click the time interval selector and pick an interval. The time interval field supports preset time intervals (1 Hour, 6 Hours, 1 Day, etc.) and custom time intervals. To select a custom time interval, click the time interval field and select **Custom time interval**. In the **Start (UTC)** and **End (UTC)** fields, select or type a date and time. - -Use the arrow keys to cycle through previous and next time intervals. When you select a time interval, the same interval is selected in the [Metrics]({{ link_prefix }}ui-overview.html#metrics) page. - -It's possible to select an interval for which no statement statistics exist. CockroachDB persists statement statistics up to 1 million rows before the oldest row is deleted. The retention period of statistics is reduced the more active a workload is and the more distinct statement fingerprints there are. - -### Filter - -To filter the statements: - -1. Click the **Filters** field.
- - To filter by [application]({{ link_prefix }}connection-parameters.html#additional-connection-parameters), select **App** and select one or more applications. - - - Queries from the SQL shell are displayed under the `$ cockroach` app. - - If you haven't set `application_name` in a client connection string, it appears as `unset`. - - To filter by one or more databases (**Database**), SQL statement types (**Statement Type**), or nodes on which the statement ran (**Node**), click the field and select one or more checkboxes. - - The **Statement Type** values map to the CockroachDB statement types [data definition language (DDL)]({{ link_prefix }}sql-statements.html#data-definition-statements), [data manipulation language (DML)]({{ link_prefix }}sql-statements.html#data-manipulation-statements), [data control language (DCL)]({{ link_prefix }}sql-statements.html#data-control-statements), and [transaction control language (TCL)]({{ link_prefix }}sql-statements.html#transaction-control-statements). - - To display only statement fingerprints that take longer than a specified time to run, specify the time and units. - - To display only statement fingerprints with queries that cause full table scans, click **Only show statements that contain queries with full table scans**. - -1. Click **Apply**. - -The following screenshot shows the statements that contain the string `rides` for the `movr` application: - -Movr rides statements diff --git a/src/current/_includes/v22.1/ui/statements-table.md b/src/current/_includes/v22.1/ui/statements-table.md deleted file mode 100644 index 6b1cc11429b..00000000000 --- a/src/current/_includes/v22.1/ui/statements-table.md +++ /dev/null @@ -1,29 +0,0 @@ -## Statements table - -Click Column selector to select the columns to display in the table. - -The Statements table gives details for each SQL statement fingerprint: - -Column | Description ------|------------ -Statements | SQL statement [fingerprint](#sql-statement-fingerprints). To view additional details, click the SQL statement fingerprint to open its [Statement Fingerprint page]({{ page_prefix }}statements-page.html#statement-fingerprint-page). -Execution Count | Cumulative number of executions of statements with this fingerprint within the [time interval](#time-interval).

The bar indicates the ratio of runtime success (gray) to [retries]({{ link_prefix }}transactions.html#transaction-retries) (red) for the SQL statement fingerprint. -Database | The database in which the statement was executed. -Rows Processed | **New in v22.1.3:** Average number of rows read and written while executing statements with this fingerprint within the time interval. -Bytes Read | Aggregation of all bytes [read from disk]({{ link_prefix }}architecture/life-of-a-distributed-transaction.html#reads-from-the-storage-layer) across all operators for statements with this fingerprint within the time interval.

The gray bar indicates the mean number of bytes read from disk. The blue bar indicates one standard deviation from the mean. Hover over the bar to display exact values. -Statement Time | Average [planning and execution time]({{ link_prefix }}architecture/sql-layer.html#sql-parser-planner-executor) of statements with this statement fingerprint within the time interval.

The gray bar indicates the mean latency. The blue bar indicates one standard deviation from the mean. Hover over the bar to display exact values. -Contention | Average time statements with this fingerprint were [in contention]({{ link_prefix }}performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention) with other transactions within the time interval.

The gray bar indicates mean contention time. The blue bar indicates one standard deviation from the mean. Hover over the bar to display exact values. -Max Memory | Maximum memory used by a statement with this fingerprint at any time during its execution within the time interval.

The gray bar indicates the average max memory usage. The blue bar indicates one standard deviation from the mean. Hover over the bar to display exact values. -Network | Amount of [data transferred over the network]({{ link_prefix }}architecture/reads-and-writes-overview.html) for statements with this fingerprint within the time interval. If this value is 0, the statement was executed on a single node.

The gray bar indicates the mean number of bytes sent over the network. The blue bar indicates one standard deviation from the mean. Hover over the bar to display exact values. -Retries | Cumulative number of automatic (internal) [retries]({{ link_prefix }}transactions.html#transaction-retries) by CockroachDB of statements with this fingerprint within the time interval. -% of All Runtime | How much time this statement fingerprint took to execute compared to all other statements that were executed within the time period. It is expressed as a percentage. The runtime is the mean execution latency multiplied by the execution count. -Regions/Nodes | The regions and nodes on which statements with this fingerprint executed.

Regions/Nodes is not visible for CockroachDB {{ site.data.products.serverless }} clusters. -Diagnostics | Activate and download [diagnostics](#diagnostics) for this fingerprint. To activate, click the **Activate** button. **New in v22.1:** The [Activate statement diagnostics](#activate-diagnostics-collection-and-download-bundles) dialog displays. After you complete the dialog, the column displays the status of diagnostics collection (**WAITING**, **READY**, or **ERROR**). Click Bundle selector and select a bundle to download or select **Cancel request** to cancel diagnostics bundle collection.

Statements are periodically cleared from the Statements page based on the start time. To access the full history of diagnostics for the fingerprint, see the [Diagnostics](#diagnostics) tab of the Statement Details page.

Diagnostics is not visible for CockroachDB {{ site.data.products.serverless }} clusters. - -{{site.data.alerts.callout_info}} -To obtain the execution statistics, CockroachDB samples a percentage of the executions. If you see `no samples` displayed in the **Contention**, **Max Memory**, or **Network** columns, there are two possibilities: -- Your statement executed successfully but wasn't sampled because there were too few executions of the statement. -- Your statement has failed (the most likely case). You can confirm by clicking the statement and viewing the value for **Failed?**. -{{site.data.alerts.end}} - -To view statement details, click a SQL statement fingerprint in the **Statements** column to open the **Statement Fingerprint** page. diff --git a/src/current/_includes/v22.1/ui/statistics.md b/src/current/_includes/v22.1/ui/statistics.md deleted file mode 100644 index 84faedada84..00000000000 --- a/src/current/_includes/v22.1/ui/statistics.md +++ /dev/null @@ -1,7 +0,0 @@ -Statistics aggregation is controlled by the `sql.stats.aggregation.interval` [cluster setting]({{ link_prefix }}cluster-settings.html), set to 1 hour by default. - -Aggregated statistics are flushed from memory to statistics tables in the [`crdb_internal`]({{ link_prefix }}crdb-internal.html) system catalog every 10 minutes. The flushing interval is controlled by the `sql.stats.flush.interval` cluster setting. - -The default retention period of the statistics tables is based on the number of rows up to 1 million records. When this threshold is reached, the oldest records are deleted. The `diagnostics.forced_sql_stat_reset.interval` [cluster setting]({{ link_prefix }}cluster-settings.html) controls when persisted statistics are deleted only if the internal cleanup service experiences a failure. - -If desired, [admin users]({{ link_prefix }}security-reference/authorization.html#admin-role) may reset SQL statistics in the DB Console UI and `crdb_internal` system catalog by clicking **reset SQL stats**. This link does not appear for non-admin users. diff --git a/src/current/_includes/v22.1/ui/transaction-details.md b/src/current/_includes/v22.1/ui/transaction-details.md deleted file mode 100644 index 526dec6930d..00000000000 --- a/src/current/_includes/v22.1/ui/transaction-details.md +++ /dev/null @@ -1,26 +0,0 @@ -## Transaction Details page - -The details displayed on the **Transaction Details** page reflect the [time interval](#time-interval) selected on the **Transactions** page. - -- The _transaction fingerprint_ is displayed as a list of the individual [SQL statement fingerprints]({{ page_prefix }}statements-page.html#sql-statement-fingerprints) in the transaction. -- The **Mean transaction time**: the mean average time it took to execute the transaction within the aggregation interval. -- **Transaction resource usage** shows overall statistics about the transaction. - - **Mean rows/bytes read**: the mean average number of rows and bytes [read from the storage layer]({{ link_prefix }}architecture/life-of-a-distributed-transaction.html#reads-from-the-storage-layer) during the execution of the transaction within the specified aggregation interval. - - **Bytes read over network**: the amount of [data transferred over the network]({{ link_prefix }}architecture/reads-and-writes-overview.html) for this transaction within the aggregation interval.

If this value is 0, the statement was executed on a single node. - - **Mean rows written**: the mean number of rows written by this transaction. - - **Max memory usage**: the maximum memory used by this transaction at any time during its execution within the aggregation interval. - - **Max scratch disk usage**: the maximum amount of data [spilled to temporary storage on disk]({{ link_prefix }}vectorized-execution.html#disk-spilling-operations) while executing this transaction within the aggregation interval. - - -The [Statements page]({{ page_prefix }}statements-page.html) displays the statement fingerprints of all the statements in the transaction. To display the [details of a statement fingerprint]({{ page_prefix }}statements-page.html#statement-fingerprint-page), click a statement fingerprint. - -## See also - -- [Transactions]({{ link_prefix }}transactions.html) -- [Transaction Layer]({{ link_prefix }}architecture/transaction-layer.html) -- [Run Multi-Statement Transactions]({{ link_prefix }}run-multi-statement-transactions.html) -{% if page.cloud != true %} -- [Transaction latency graphs](ui-sql-dashboard.html#transactions) -{% endif %} -- [Transaction retries]({{ link_prefix }}transactions.html#transaction-retries) -- [Statements Page]({{ page_prefix }}statements-page.html) diff --git a/src/current/_includes/v22.1/ui/transactions-filter.md b/src/current/_includes/v22.1/ui/transactions-filter.md deleted file mode 100644 index 9ce106b5a2d..00000000000 --- a/src/current/_includes/v22.1/ui/transactions-filter.md +++ /dev/null @@ -1,31 +0,0 @@ -{% if page.cloud == true %} - {% capture link_prefix %}../{{site.current_cloud_version}}/{% endcapture %} - {% assign page_prefix = "" %} -{% else %} - {% assign link_prefix = "" %} - {% assign page_prefix = "ui-" %} -{% endif %} - -### Time interval - -To view [statement fingerprints]({{ page_prefix }}statements-page.html#sql-statement-fingerprints) within a specific time interval, click the time interval selector and pick an interval. The time interval field supports preset time intervals (1 Hour, 6 Hours, 1 Day, etc.) and custom time intervals. To select a custom time interval, click the time interval field and select **Custom time interval**. In the **Start (UTC)** and **End (UTC)** fields select or type a date and time. - -Use the arrow keys to cycle through previous and next time intervals. When you select a time interval, the same interval is selected in the [Metrics]({{ link_prefix }}ui-overview.html#metrics) page. - -It's possible to select an interval for which no transaction statistics exist. CockroachDB persists transaction statistics up to 1 million rows before the oldest row is deleted. The retention period of statistics is reduced the more active a workload is and the more distinct statement fingerprints there are. - -### Filter - -To filter the transactions: - -1. Click the **Filters** field. - - To filter by [application]({{ link_prefix }}connection-parameters.html#additional-connection-parameters), select **App** and select one or more applications. - - - Queries from the SQL shell are displayed under the `$ cockroach` app. - - If you haven't set `application_name` in a client connection string, it appears as `unset`. - - To filter by the nodes on which the transaction ran, click the **Node** field and select one or more checkboxes. - - To display only statement fingerprints that take longer than a specified time to run, specify the time and units. - -1. Click **Apply**. 
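If most of your traffic shows up under the `unset` application, it is because the client connection never set `application_name`. A minimal sketch of setting it for the current session follows (the name `movr` is only an example); clients can also pass it as a connection-string parameter, for example `postgresql://root@localhost:26257/movr?application_name=movr`:

{% include_cached copy-clipboard.html %}
~~~ sql
-- Group this session's statements and transactions under an application name.
SET application_name = 'movr';

-- Verify the value the SQL Activity pages will filter on.
SHOW application_name;
~~~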
- -Movr rides transactions diff --git a/src/current/_includes/v22.1/ui/transactions-table.md b/src/current/_includes/v22.1/ui/transactions-table.md deleted file mode 100644 index 4e7ae2aa747..00000000000 --- a/src/current/_includes/v22.1/ui/transactions-table.md +++ /dev/null @@ -1,25 +0,0 @@ -## Transactions table - -Click Column selector to select the columns to display in the table. - -The Transactions table gives details for each SQL statement fingerprint in the transaction: - -Column | Description ------|------------ -Transactions | The [SQL statement fingerprints]({{ page_prefix }}statements-page.html#sql-statement-fingerprints) that make up the transaction. To view the transaction fingerprint and details, click to open the [Transaction Details page](#transaction-details-page). -Execution Count | Cumulative number of executions of this transaction within the [time interval](#time-interval).

The bar indicates the ratio of runtime success (gray) to [retries]({{ link_prefix }}transactions.html#transaction-retries) (red) for the transaction. -Rows Processed | Average number of rows read and written while executing statements with this fingerprint within the time interval. -Bytes Read | Aggregation of all bytes [read from disk]({{ link_prefix }}architecture/life-of-a-distributed-transaction.html#reads-from-the-storage-layer) across all operators for this transaction within the time interval.

The gray bar indicates the mean number of bytes read from disk. The blue bar indicates one standard deviation from the mean. -Transaction Time | Average [planning and execution time]({{ link_prefix }}architecture/sql-layer.html#sql-parser-planner-executor) of this transaction within the time interval.

The gray bar indicates the mean latency. The blue bar indicates one standard deviation from the mean. -Contention | Average time this transaction was [in contention]({{ link_prefix }}performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention) with other transactions within the time interval. -Max Memory | Maximum memory used by this transaction at any time during its execution within the time interval.

The gray bar indicates the average max memory usage. The blue bar indicates one standard deviation from the mean. -Network | Amount of [data transferred over the network]({{ link_prefix }}architecture/reads-and-writes-overview.html) for this transaction within the time interval.

If this value is 0, the transaction was executed on a single node.

The gray bar indicates the mean number of bytes sent over the network. The blue bar indicates one standard deviation from the mean. -Retries | Cumulative number of [retries]({{ link_prefix }}transactions.html#transaction-retries) of this transaction within the time interval. -Regions/Nodes | The region and nodes in which the transaction was executed.

**Regions/Nodes** are not visible for CockroachDB {{ site.data.products.serverless }} clusters. -Statements | Number of SQL statements in the transaction. - -{{site.data.alerts.callout_info}} -Significant transactions on your database are likely to have a high execution count or number of rows read. -{{site.data.alerts.end}} - -To view transaction details, click a transaction fingerprint in the **Transactions** column to open the **Transaction Details** page. diff --git a/src/current/_includes/v22.1/ui/ui-log-files.md b/src/current/_includes/v22.1/ui/ui-log-files.md deleted file mode 100644 index 0f3da9ef34c..00000000000 --- a/src/current/_includes/v22.1/ui/ui-log-files.md +++ /dev/null @@ -1,7 +0,0 @@ -Log files can be accessed using the DB Console, which displays them in JSON format. - -1. [Access the DB Console](ui-overview.html#db-console-access) and then click [**Advanced Debug**](ui-debug-pages.html) in the left-hand navigation. - -2. Under **Raw Status Endpoints (JSON)**, click **Log Files** to view the JSON of all collected logs. - -3. Copy one of the log filenames. Then click **Specific Log File** and replace the `cockroach.log` placeholder in the URL with the filename. \ No newline at end of file diff --git a/src/current/_includes/v22.1/ui/ui-metrics-navigation.md b/src/current/_includes/v22.1/ui/ui-metrics-navigation.md deleted file mode 100644 index adeca2e0b51..00000000000 --- a/src/current/_includes/v22.1/ui/ui-metrics-navigation.md +++ /dev/null @@ -1,10 +0,0 @@ -## Dashboard navigation - -Use the **Graph** menu to display metrics for your entire cluster or for a specific node: - -- When set to **Graph: Cluster**, data is aggregated together for all nodes in your cluster. -- When set to **Graph: {node}**, only data for the specific selected node is shown. - -To the right of the Graph and Dashboard menus, a time interval selector allows you to filter the view for a predefined or custom time interval. Use the navigation buttons to move to the previous, next, or current time interval. When you select a time interval, the same interval is selected in the [SQL Activity](ui-overview.html#sql-activity) pages. However, if you select 10 or 30 minutes, the interval defaults to 1 hour in SQL Activity pages. - -When viewing graphs, a tooltip will appear at your mouse cursor providing further insight into the data under the mouse cursor. Click anywhere within the graph to pin the tooltip in place, decoupling the tooltip from your mouse movements. Click anywhere within the graph to cause the tooltip to follow your mouse once more. diff --git a/src/current/_includes/v22.1/ui/ui-sql-latency-99th-percentile.md b/src/current/_includes/v22.1/ui/ui-sql-latency-99th-percentile.md deleted file mode 100644 index 9b10305966d..00000000000 --- a/src/current/_includes/v22.1/ui/ui-sql-latency-99th-percentile.md +++ /dev/null @@ -1,5 +0,0 @@ -Service latency is calculated as the time in nanoseconds between when the cluster [receives a query and finishes executing the query](architecture/sql-layer.html). This time does not include returning results to the client. Service latency includes metrics only from DML (`SELECT`,` INSERT`, `UPDATE`, and `DELETE`) statements. - -- In the node view, the graph shows the 99th [percentile](https://en.wikipedia.org/wiki/Percentile#The_normal_distribution_and_percentiles) of service latency for the node. Over the last minute this node executed 99% of queries within this time, not including network latency between the node and the client. 
- -- In the cluster view, the graph shows the 99th [percentile](https://en.wikipedia.org/wiki/Percentile#The_normal_distribution_and_percentiles) of service latency across all nodes in the cluster. There are lines for each node in the cluster. Over the last minute the node executed 99% of queries within this time, not including network latency between the node and the client. \ No newline at end of file diff --git a/src/current/_includes/v22.1/ui/ui-summary-events.md b/src/current/_includes/v22.1/ui/ui-summary-events.md deleted file mode 100644 index bd2848e8dfe..00000000000 --- a/src/current/_includes/v22.1/ui/ui-summary-events.md +++ /dev/null @@ -1,41 +0,0 @@ -## Summary and events - -### Summary panel - -A **Summary** panel of key metrics is displayed to the right of the timeseries graphs. - -Metric | Description ---------|---- -Total Nodes | The total number of nodes in the cluster. [Decommissioned nodes](node-shutdown.html?filters=decommission) are not included in this count. -Capacity Used | The storage capacity used as a percentage of [usable capacity](ui-cluster-overview-page.html#capacity-metrics) allocated across all nodes. -Unavailable Ranges | The number of unavailable ranges in the cluster. A non-zero number indicates an unstable cluster. -Queries per second | The total number of `SELECT`, `UPDATE`, `INSERT`, and `DELETE` queries executed per second across the cluster. -P99 Latency | The 99th percentile of service latency. - -{{site.data.alerts.callout_info}} -{% include {{ page.version.version }}/misc/available-capacity-metric.md %} -{{site.data.alerts.end}} - -### Events panel - -Underneath the [Summary](#summary-panel) panel, the **Events** panel lists the 5 most recent events logged for all nodes across the cluster. To list all events, click **View all events**. - -DB Console Events - -The following types of events are listed: - -- Database created -- Database dropped -- Table created -- Table dropped -- Table altered -- Index created -- Index dropped -- View created -- View dropped -- Schema change reversed -- Schema change finished -- Node joined -- Node decommissioned -- Node restarted -- Cluster setting changed \ No newline at end of file diff --git a/src/current/_includes/v22.1/userfile-examples/backup-userfile.md b/src/current/_includes/v22.1/userfile-examples/backup-userfile.md deleted file mode 100644 index 08bcef93e52..00000000000 --- a/src/current/_includes/v22.1/userfile-examples/backup-userfile.md +++ /dev/null @@ -1,35 +0,0 @@ -We recommend starting backups from a time at least 10 seconds in the past using [`AS OF SYSTEM TIME`](as-of-system-time.html). Read our guidance in the [Performance](backup.html#performance) section on the [`BACKUP`](backup.html) page. - -{{site.data.alerts.callout_info}} -Only database and table-level backups are possible when using `userfile` as storage. Restoring cluster-level backups will not work because `userfile` data is stored in the `defaultdb` database, and you cannot restore a cluster with existing table data. -{{site.data.alerts.end}} - -When working on the same cluster, `userfile` storage allows for database and table-level backups. - -First, run the following statement to backup a database to a directory in the default `userfile` space: - -~~~sql -BACKUP DATABASE bank INTO 'userfile://defaultdb.public.userfiles_$user/bank-backup' AS OF SYSTEM TIME '-10s'; -~~~ - -This directory will hold the files that make up a backup; including the manifest file and data files. 
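To check what the collection contains, you can list the backups stored at the same `userfile` destination. This is a minimal sketch; the path simply mirrors the one used in the `BACKUP` statement above:

{% include_cached copy-clipboard.html %}
~~~ sql
-- List the backups stored in the userfile-backed collection.
SHOW BACKUPS IN 'userfile://defaultdb.public.userfiles_$user/bank-backup';
~~~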
- -{{site.data.alerts.callout_info}} -When backing up from a cluster and restoring a database or table that is stored in your `userfile` space to a different cluster, you can run [`cockroach userfile get`](cockroach-userfile-get.html) to download the backup files to a local machine and [`cockroach userfile upload -r --url {CONNECTION STRING}`](cockroach-userfile-upload.html#upload-a-directory-recursively) to upload to the `userfile` of the restoring cluster. -{{site.data.alerts.end}} - -When your database needs to be restored, run the following: - -~~~sql -RESTORE DATABASE bank FROM LATEST IN 'userfile://defaultdb.public.userfiles_$user/bank-backup'; -~~~ - -It is also possible to use `userfile:///bank-backup`, as `userfile:///` refers to the default path `userfile://defaultdb.public.userfiles_$user/`. - -Once the backup data is no longer needed, delete it from `userfile` storage with the following command: - -~~~shell -cockroach userfile delete bank-backup --url {CONNECTION STRING} -~~~ - -If you use `cockroach userfile delete {file}`, the file will not be removed from disk until the [garbage collection](configure-replication-zones.html#gc-ttlseconds) period has passed. diff --git a/src/current/_includes/v22.1/userfile-examples/freetier-userfile-note.md b/src/current/_includes/v22.1/userfile-examples/freetier-userfile-note.md deleted file mode 100644 index a30dc0e4429..00000000000 --- a/src/current/_includes/v22.1/userfile-examples/freetier-userfile-note.md +++ /dev/null @@ -1,5 +0,0 @@ -{{site.data.alerts.callout_info}} -It is possible to [back up](backup.html), [restore](restore.html), [import](import-into.html), and run [core changefeeds](changefeed-for.html) in [CockroachDB {{ site.data.products.serverless }}](../cockroachcloud/quickstart.html) clusters that have been upgraded to v21.1 or later. - -[`userfile`](use-userfile-for-bulk-operations.html) storage is available in CockroachDB {{ site.data.products.serverless }} clusters for backups, restores, and imports. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/userfile-examples/import-into-userfile.md b/src/current/_includes/v22.1/userfile-examples/import-into-userfile.md deleted file mode 100644 index 55e89f2fdf4..00000000000 --- a/src/current/_includes/v22.1/userfile-examples/import-into-userfile.md +++ /dev/null @@ -1,31 +0,0 @@ -To import from `userfile`, first create the table that you would like to import into: - -{% include_cached copy-clipboard.html %} -~~~sql -CREATE TABLE customers ( - id INT, - dob DATE, - first_name STRING, - last_name STRING, - joined DATE ); -~~~ - -Then, use `IMPORT INTO` to import data into the table: - -{% include_cached copy-clipboard.html %} -~~~sql -IMPORT INTO customers (id, dob, first_name, last_name, joined) - CSV DATA ('userfile:///test-data.csv'); -~~~ - -`userfile:///` references the default path (`userfile://defaultdb.public.userfiles_$user/`). - -~~~ - job_id | status | fraction_completed | rows | index_entries | bytes ----------------------+-----------+--------------------+--------+---------------+----------- 599865027685613569 | succeeded | 1 | 300024 | 0 | 13389972 -(1 row) -~~~ - -For more import options, see [`IMPORT INTO`](import-into.html).
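Note that the `IMPORT INTO` example above assumes that `test-data.csv` has already been placed in your `userfile` storage. A minimal sketch of uploading the file first (the local path and the `{CONNECTION STRING}` placeholder are illustrative):

~~~shell
# Upload a local CSV to the default userfile space so it is reachable at userfile:///test-data.csv.
cockroach userfile upload /path/to/test-data.csv test-data.csv --url {CONNECTION STRING}
~~~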
diff --git a/src/current/_includes/v22.1/zone-configs/constrain-leaseholders-to-specific-datacenters.md b/src/current/_includes/v22.1/zone-configs/constrain-leaseholders-to-specific-datacenters.md deleted file mode 100644 index 8a2a9ad1b35..00000000000 --- a/src/current/_includes/v22.1/zone-configs/constrain-leaseholders-to-specific-datacenters.md +++ /dev/null @@ -1,32 +0,0 @@ -In addition to [constraining replicas to specific availability zones](configure-replication-zones.html#per-replica-constraints-to-specific-availability-zones), you may also specify preferences for where the range's leaseholders should be placed. This can result in increased performance in some scenarios. - -The [`ALTER TABLE ... CONFIGURE ZONE`](configure-zone.html) statement below requires that the cluster try to place the ranges' leaseholders in zone `us-east1`; if that is not possible, it will try to place them in zone `us-west1`. - -For more information about how the `lease_preferences` field works, see its description in the [Replication zone variables](configure-replication-zones.html#replication-zone-variables) section. - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE users CONFIGURE ZONE USING num_replicas = 3, constraints = '{"+region=us-east1": 1, "+region=us-west1": 1}', lease_preferences = '[[+region=us-east1], [+region=us-west1]]'; -~~~ - -~~~ -CONFIGURE ZONE 1 -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW ZONE CONFIGURATION FROM TABLE users; -~~~ - -~~~ - target | raw_config_sql -+-------------+--------------------------------------------------------------------+ - TABLE users | ALTER TABLE users CONFIGURE ZONE USING - | range_min_bytes = 134217728, - | range_max_bytes = 536870912, - | gc.ttlseconds = 100000, - | num_replicas = 3, - | constraints = '{+region=us-east1: 1, +region=us-west1: 1}', - | lease_preferences = '[[+region=us-east1], [+region=us-west1]]' -(1 row) -~~~ diff --git a/src/current/_includes/v22.1/zone-configs/create-a-replication-zone-for-a-database.md b/src/current/_includes/v22.1/zone-configs/create-a-replication-zone-for-a-database.md deleted file mode 100644 index 3bbaab20aff..00000000000 --- a/src/current/_includes/v22.1/zone-configs/create-a-replication-zone-for-a-database.md +++ /dev/null @@ -1,28 +0,0 @@ -To control replication for a specific database, use the `ALTER DATABASE ... 
CONFIGURE ZONE` statement to define the relevant values (other values will be inherited from the parent zone): - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER DATABASE movr CONFIGURE ZONE USING num_replicas = 5, gc.ttlseconds = 100000; -~~~ - -~~~ -CONFIGURE ZONE 1 -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW ZONE CONFIGURATION FROM DATABASE movr; -~~~ - -~~~ - target | raw_config_sql -----------------+------------------------------------------- - DATABASE movr | ALTER DATABASE movr CONFIGURE ZONE USING - | range_min_bytes = 134217728, - | range_max_bytes = 536870912, - | gc.ttlseconds = 100000, - | num_replicas = 5, - | constraints = '[]', - | lease_preferences = '[]' -(1 row) -~~~ diff --git a/src/current/_includes/v22.1/zone-configs/create-a-replication-zone-for-a-secondary-index.md b/src/current/_includes/v22.1/zone-configs/create-a-replication-zone-for-a-secondary-index.md deleted file mode 100644 index 29a8eec64b5..00000000000 --- a/src/current/_includes/v22.1/zone-configs/create-a-replication-zone-for-a-secondary-index.md +++ /dev/null @@ -1,40 +0,0 @@ -{{site.data.alerts.callout_success}} -The [Cost-based Optimizer](cost-based-optimizer.html) can take advantage of replication zones for secondary indexes when optimizing queries. -{{site.data.alerts.end}} - -{% include enterprise-feature.md %} - -The [secondary indexes](indexes.html) on a table will automatically use the replication zone for the table. However, with an enterprise license, you can add distinct replication zones for secondary indexes. - -To control replication for a specific secondary index, use the `ALTER INDEX ... CONFIGURE ZONE` statement to define the relevant values (other values will be inherited from the parent zone). - -{{site.data.alerts.callout_success}} -To get the name of a secondary index, which you need for the `CONFIGURE ZONE` statement, use the [`SHOW INDEX`](show-index.html) or [`SHOW CREATE TABLE`](show-create.html) statements. -{{site.data.alerts.end}} - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER INDEX vehicles@vehicles_auto_index_fk_city_ref_users CONFIGURE ZONE USING num_replicas = 5, gc.ttlseconds = 100000; -~~~ - -~~~ -CONFIGURE ZONE 1 -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW ZONE CONFIGURATION FROM INDEX vehicles@vehicles_auto_index_fk_city_ref_users; -~~~ - -~~~ - target | raw_config_sql -+------------------------------------------------------+---------------------------------------------------------------------------------+ - INDEX vehicles@vehicles_auto_index_fk_city_ref_users | ALTER INDEX vehicles@vehicles_auto_index_fk_city_ref_users CONFIGURE ZONE USING - | range_min_bytes = 134217728, - | range_max_bytes = 536870912, - | gc.ttlseconds = 100000, - | num_replicas = 5, - | constraints = '[]', - | lease_preferences = '[]' -(1 row) -~~~ diff --git a/src/current/_includes/v22.1/zone-configs/create-a-replication-zone-for-a-system-range.md b/src/current/_includes/v22.1/zone-configs/create-a-replication-zone-for-a-system-range.md deleted file mode 100644 index 84aa90c17f5..00000000000 --- a/src/current/_includes/v22.1/zone-configs/create-a-replication-zone-for-a-system-range.md +++ /dev/null @@ -1,41 +0,0 @@ -In addition to the databases and tables that are visible via the SQL interface, CockroachDB stores internal data in what are called system ranges. 
CockroachDB comes with pre-configured replication zones for some of these ranges: - -Target Name | Description -----------|----------------------------- -`meta` | The "meta" ranges contain the authoritative information about the location of all data in the cluster.

These ranges must retain a majority of replicas for the cluster as a whole to remain available, and historical queries are never run on them. For this reason, CockroachDB comes with a **pre-configured** `meta` replication zone with `num_replicas` set to 5 to make these ranges more resilient to node failure, and a lower-than-default `gc.ttlseconds` to keep these ranges smaller for reliable performance.

If your cluster is running in multiple datacenters, it's a best practice to configure the meta ranges to have a copy in each datacenter. -`liveness` | The "liveness" range contains the authoritative information about which nodes are live at any given time.

These ranges must retain a majority of replicas for the cluster as a whole to remain available, and historical queries are never run on them. For this reason, CockroachDB comes with a **pre-configured** `liveness` replication zone with `num_replicas` set to 5 to make these ranges more resilient to node failure, and a lower-than-default `gc.ttlseconds` to keep these ranges smaller for reliable performance. -`system` | There are system ranges for a variety of other important internal data, including information needed to allocate new table IDs and track the status of a cluster's nodes.

These ranges must retain a majority of replicas for the cluster as a whole to remain available, so CockroachDB comes with a **pre-configured** `system` replication zone with `num_replicas` set to 5 to make these ranges more resilient to node failure. -`timeseries` | The "timeseries" ranges contain monitoring data about the cluster that powers the graphs in CockroachDB's DB Console. If necessary, you can add a `timeseries` replication zone to control the replication of this data. - -{{site.data.alerts.callout_danger}} -Use caution when editing replication zones for system ranges, as they could cause some (or all) parts of your cluster to stop working. -{{site.data.alerts.end}} - -To control replication for one of the above sets of system ranges, use the [`ALTER RANGE ... CONFIGURE ZONE`](configure-zone.html) statement to define the relevant values (other values will be inherited from the parent zone): - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER RANGE meta CONFIGURE ZONE USING num_replicas = 7; -~~~ - -~~~ -CONFIGURE ZONE 1 -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW ZONE CONFIGURATION FROM RANGE meta; -~~~ - -~~~ - target | raw_config_sql -+------------+---------------------------------------+ - RANGE meta | ALTER RANGE meta CONFIGURE ZONE USING - | range_min_bytes = 134217728, - | range_max_bytes = 536870912, - | gc.ttlseconds = 3600, - | num_replicas = 7, - | constraints = '[]', - | lease_preferences = '[]' -(1 row) -~~~ diff --git a/src/current/_includes/v22.1/zone-configs/create-a-replication-zone-for-a-table-partition.md b/src/current/_includes/v22.1/zone-configs/create-a-replication-zone-for-a-table-partition.md deleted file mode 100644 index 775ecc7028c..00000000000 --- a/src/current/_includes/v22.1/zone-configs/create-a-replication-zone-for-a-table-partition.md +++ /dev/null @@ -1,63 +0,0 @@ -{% unless include.hide-enterprise-warning == "true" %} -{% include enterprise-feature.md %} -{% endunless %} - -Once [partitions have been defined for a table or a secondary index](partition-by.html), to control replication for a partition, use `ALTER PARTITION OF INDEX CONFIGURE ZONE`: - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER PARTITION us_west OF INDEX vehicles@primary - CONFIGURE ZONE USING - num_replicas = 5, - constraints = '[+region=us-west1]'; -~~~ - -~~~ -CONFIGURE ZONE 1 -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER PARTITION us_west OF INDEX vehicles@vehicles_auto_index_fk_city_ref_users - CONFIGURE ZONE USING - num_replicas = 5, - constraints = '[+region=us-west1]'; -~~~ - -~~~ -CONFIGURE ZONE 1 -~~~ - -To define replication zones for identically named partitions of a table and its secondary indexes, you can use the `@*` syntax to save several steps: - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER PARTITION us_west OF INDEX vehicles@* - CONFIGURE ZONE USING - num_replicas = 5, - constraints = '[+region=us-west1]'; -~~~ - -To view the zone configuration for a partition, use `SHOW ZONE CONFIGURATION FROM PARTITION OF INDEX `: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW ZONE CONFIGURATION FROM PARTITION us_west OF INDEX vehicles@primary; -~~~ - -~~~ - target | raw_config_sql -----------------------------------------------+------------------------------------------------------------------------- - PARTITION us_west OF INDEX vehicles@primary | ALTER PARTITION us_west OF INDEX vehicles@primary CONFIGURE ZONE USING - | range_min_bytes = 134217728, - | range_max_bytes = 536870912, 
- | gc.ttlseconds = 90000, - | num_replicas = 5, - | constraints = '[+region=us-west1]', - | lease_preferences = '[]' -(1 row) -~~~ - -{{site.data.alerts.callout_success}} -You can also use the [`SHOW CREATE TABLE`](show-create.html) statement or [`SHOW PARTITIONS`](show-partitions.html) statements to view details about all of the replication zones defined for the partitions of a table and its secondary indexes. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/zone-configs/create-a-replication-zone-for-a-table.md b/src/current/_includes/v22.1/zone-configs/create-a-replication-zone-for-a-table.md deleted file mode 100644 index 10ef4343bbf..00000000000 --- a/src/current/_includes/v22.1/zone-configs/create-a-replication-zone-for-a-table.md +++ /dev/null @@ -1,28 +0,0 @@ -To control replication for a specific table, use the `ALTER TABLE ... CONFIGURE ZONE` statement to define the relevant values (other values will be inherited from the parent zone): - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE users CONFIGURE ZONE USING num_replicas = 5, gc.ttlseconds = 100000; -~~~ - -~~~ -CONFIGURE ZONE 1 -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW ZONE CONFIGURATION FROM TABLE users; -~~~ - -~~~ - target | raw_config_sql ---------------+----------------------------------------- - TABLE users | ALTER TABLE users CONFIGURE ZONE USING - | range_min_bytes = 134217728, - | range_max_bytes = 536870912, - | gc.ttlseconds = 100000, - | num_replicas = 5, - | constraints = '[]', - | lease_preferences = '[]' -(1 row) -~~~ diff --git a/src/current/_includes/v22.1/zone-configs/edit-the-default-replication-zone.md b/src/current/_includes/v22.1/zone-configs/edit-the-default-replication-zone.md deleted file mode 100644 index 9b0ad319dc6..00000000000 --- a/src/current/_includes/v22.1/zone-configs/edit-the-default-replication-zone.md +++ /dev/null @@ -1,28 +0,0 @@ -To edit the default replication zone, use the `ALTER RANGE ... CONFIGURE ZONE` statement to define the values you want to change (other values will remain the same): - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER RANGE default CONFIGURE ZONE USING num_replicas = 5, gc.ttlseconds = 100000; -~~~ - -~~~ -CONFIGURE ZONE 1 -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW ZONE CONFIGURATION FROM RANGE default; -~~~ - -~~~ - target | raw_config_sql -+---------------+------------------------------------------+ - RANGE default | ALTER RANGE default CONFIGURE ZONE USING - | range_min_bytes = 134217728, - | range_max_bytes = 536870912, - | gc.ttlseconds = 100000, - | num_replicas = 5, - | constraints = '[]', - | lease_preferences = '[]' -(1 row) -~~~ diff --git a/src/current/_includes/v22.1/zone-configs/remove-a-replication-zone.md b/src/current/_includes/v22.1/zone-configs/remove-a-replication-zone.md deleted file mode 100644 index 610fc846b32..00000000000 --- a/src/current/_includes/v22.1/zone-configs/remove-a-replication-zone.md +++ /dev/null @@ -1,12 +0,0 @@ -{{site.data.alerts.callout_info}} -You cannot `DISCARD` any zone configurations on multi-region tables, indexes, or partitions if the [multi-region abstractions](migrate-to-multiregion-sql.html#replication-zone-patterns-and-multi-region-sql-abstractions) created the zone configuration. 
-{{site.data.alerts.end}} - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE t CONFIGURE ZONE DISCARD; -~~~ - -~~~ -CONFIGURE ZONE 1 -~~~ diff --git a/src/current/_includes/v22.1/zone-configs/reset-a-replication-zone.md b/src/current/_includes/v22.1/zone-configs/reset-a-replication-zone.md deleted file mode 100644 index 7be91ffbd3d..00000000000 --- a/src/current/_includes/v22.1/zone-configs/reset-a-replication-zone.md +++ /dev/null @@ -1,8 +0,0 @@ -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE t CONFIGURE ZONE USING DEFAULT; -~~~ - -~~~ -CONFIGURE ZONE 1 -~~~ diff --git a/src/current/_includes/v22.1/zone-configs/variables.md b/src/current/_includes/v22.1/zone-configs/variables.md deleted file mode 100644 index c80cf0921dc..00000000000 --- a/src/current/_includes/v22.1/zone-configs/variables.md +++ /dev/null @@ -1,17 +0,0 @@ -Variable | Description -------|------------ -`range_min_bytes` | The minimum size, in bytes, for a range of data in the zone. When a range is less than this size, CockroachDB will merge it with an adjacent range.

**Default:** `134217728` (128 MiB) -`range_max_bytes` | The maximum size, in bytes, for a range of data in the zone. When a range reaches this size, CockroachDB will split it into two ranges.

**Default:** `536870912` (512 MiB) -`gc.ttlseconds` | The number of seconds overwritten values will be retained before garbage collection. Smaller values can save disk space if values are frequently overwritten; larger values increase the range allowed for `AS OF SYSTEM TIME` queries, also known as [Time Travel Queries](select-clause.html#select-historical-data-time-travel).

It is not recommended to set this below `600` (10 minutes); doing so will cause problems for long-running queries. Also, since all versions of a row are stored in a single range that never splits, it is not recommended to set this so high that all the changes to a row in that time period could add up to more than 512 MiB; such oversized ranges can cause the server to run out of memory or experience other problems. {{site.data.alerts.callout_info}} Ensure that you set `gc.ttlseconds` long enough to accommodate your [backup schedule](create-schedule-for-backup.html); otherwise, your incremental backups will fail with [this error](common-errors.html#protected-ts-verification-error). For example, if you set up your backup schedule to recur daily, but you set `gc.ttlseconds` to less than one day, all your incremental backups will fail.{{site.data.alerts.end}} **Default:** `90000` (25 hours) -`num_replicas` | The number of replicas in the zone, also called the "replication factor".

**Default:** `3`

For the `system` database and `.meta`, `.liveness`, and `.system` ranges, the default value is `5`.

For [multi-region databases configured to survive region failures](multiregion-overview.html#surviving-region-failures), the default value is `5`; this will include both [voting](#num_voters) and [non-voting replicas](architecture/replication-layer.html#non-voting-replicas). -`constraints` | An array of required (`+`) and/or prohibited (`-`) constraints influencing the location of replicas. See [Types of Constraints](configure-replication-zones.html#types-of-constraints) and [Scope of Constraints](configure-replication-zones.html#scope-of-constraints) for more details.

To prevent hard-to-detect typos, constraints placed on [store attributes and node localities](configure-replication-zones.html#descriptive-attributes-assigned-to-nodes) must match the values passed to at least one node in the cluster. If not, an error is signaled. To prevent this error, make sure at least one active node is configured to match the constraint. For example, apply `constraints = '[+region=west]'` only if you have set `--locality=region=west` for at least one node while starting the cluster.

**Default:** No constraints, with CockroachDB locating each replica on a unique node and attempting to spread replicas evenly across localities. -`lease_preferences` | An ordered list of required and/or prohibited constraints influencing the location of [leaseholders](architecture/glossary.html#architecture-leaseholder). Whether each constraint is required or prohibited is expressed with a leading `+` or `-`, respectively. Note that lease preference constraints do not have to be shared with the `constraints` field. For example, it's valid for your configuration to define a `lease_preferences` field that does not reference any values from the `constraints` field. It's also valid to define a `lease_preferences` field with no `constraints` field at all.

If the first preference cannot be satisfied, CockroachDB will attempt to satisfy the second preference, and so on. If none of the preferences can be met, the lease will be placed using the default lease placement algorithm, which bases placement decisions on how many leases each node already holds and tries to keep the number of leases roughly even across nodes.

Each value in the list can include multiple constraints. For example, the list `[[+zone=us-east-1b, +ssd], [+zone=us-east-1a], [+zone=us-east-1c, +ssd]]` means "prefer nodes with an SSD in `us-east-1b`, then any nodes in `us-east-1a`, then nodes in `us-east-1c` with an SSD."

For a usage example, see [Constrain leaseholders to specific availability zones](configure-replication-zones.html#constrain-leaseholders-to-specific-availability-zones).

**Default**: No lease location preferences are applied if this field is not specified. -`global_reads` | If `true`, transactions operating on the range(s) affected by this zone config should be [non-blocking](architecture/transaction-layer.html#non-blocking-transactions), which slows down writes but allows reads from any replica in the range. Most users will not need to modify this setting; it is applied automatically when you [use the `GLOBAL` table locality in a multi-region cluster](global-tables.html). -`num_voters` | Specifies the number of [voting replicas](architecture/life-of-a-distributed-transaction.html#consensus). When set, `num_replicas` will be the sum of voting and [non-voting replicas](architecture/replication-layer.html#non-voting-replicas). Most users will not need to modify this setting; it is part of the underlying machinery that enables [improved multi-region capabilities in v21.1 and above](multiregion-overview.html). -`voter_constraints` | Specifies the constraints that govern the placement of voting replicas. This differs from the `constraints` field, which will govern the placement of all voting and non-voting replicas. Most users will not need to modify this setting; it is part of the underlying machinery that enables [improved multi-region capabilities in v21.1 and above](multiregion-overview.html). - -{{site.data.alerts.callout_info}} -If a value is not set, new zone configurations will inherit their values from their parent zone (e.g., a partition zone inherits from the table zone), which is not necessarily `default`. - -If a variable is set to `COPY FROM PARENT` (e.g., `range_max_bytes = COPY FROM PARENT`), the variable will copy its value from its parent [replication zone](configure-replication-zones.html). The `COPY FROM PARENT` value is a convenient shortcut to use so you do not have to look up the parent's current value. For example, the `range_max_bytes` and `range_min_bytes` variables must be set together, so when editing one value, you can use `COPY FROM PARENT` for the other. Note that if the variable in the parent replication zone is changed after the child replication zone is copied, the change will not be reflected in the child zone. 
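For example, a minimal sketch of editing only `range_max_bytes` on the `users` table while copying `range_min_bytes` from its parent zone (the specific byte value is illustrative):

~~~ sql
> ALTER TABLE users CONFIGURE ZONE USING range_max_bytes = 268435456, range_min_bytes = COPY FROM PARENT;
~~~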
-{{site.data.alerts.end}} diff --git a/src/current/_includes/v22.1/zone-configs/view-all-replication-zones.md b/src/current/_includes/v22.1/zone-configs/view-all-replication-zones.md deleted file mode 100644 index 6704047dac9..00000000000 --- a/src/current/_includes/v22.1/zone-configs/view-all-replication-zones.md +++ /dev/null @@ -1,113 +0,0 @@ -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW ALL ZONE CONFIGURATIONS; -~~~ - -~~~ - target | raw_config_sql --------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------- - RANGE default | ALTER RANGE default CONFIGURE ZONE USING - | range_min_bytes = 134217728, - | range_max_bytes = 536870912, - | gc.ttlseconds = 90000, - | num_replicas = 3, - | constraints = '[]', - | lease_preferences = '[]' - DATABASE system | ALTER DATABASE system CONFIGURE ZONE USING - | range_min_bytes = 134217728, - | range_max_bytes = 536870912, - | gc.ttlseconds = 90000, - | num_replicas = 5, - | constraints = '[]', - | lease_preferences = '[]' - RANGE meta | ALTER RANGE meta CONFIGURE ZONE USING - | range_min_bytes = 134217728, - | range_max_bytes = 536870912, - | gc.ttlseconds = 3600, - | num_replicas = 5, - | constraints = '[]', - | lease_preferences = '[]' - RANGE system | ALTER RANGE system CONFIGURE ZONE USING - | range_min_bytes = 134217728, - | range_max_bytes = 536870912, - | gc.ttlseconds = 90000, - | num_replicas = 5, - | constraints = '[]', - | lease_preferences = '[]' - RANGE liveness | ALTER RANGE liveness CONFIGURE ZONE USING - | range_min_bytes = 134217728, - | range_max_bytes = 536870912, - | gc.ttlseconds = 600, - | num_replicas = 5, - | constraints = '[]', - | lease_preferences = '[]' - TABLE system.public.replication_constraint_stats | ALTER TABLE system.public.replication_constraint_stats CONFIGURE ZONE USING - | gc.ttlseconds = 600, - | constraints = '[]', - | lease_preferences = '[]' - TABLE system.public.replication_stats | ALTER TABLE system.public.replication_stats CONFIGURE ZONE USING - | gc.ttlseconds = 600, - | constraints = '[]', - | lease_preferences = '[]' - PARTITION us_west OF INDEX movr.public.users@primary | ALTER PARTITION us_west OF INDEX movr.public.users@primary CONFIGURE ZONE USING - | constraints = '[+region=us-west1]' - PARTITION us_east OF INDEX movr.public.users@primary | ALTER PARTITION us_east OF INDEX movr.public.users@primary CONFIGURE ZONE USING - | constraints = '[+region=us-east1]' - PARTITION europe_west OF INDEX movr.public.users@primary | ALTER PARTITION europe_west OF INDEX movr.public.users@primary CONFIGURE ZONE USING - | constraints = '[+region=europe-west1]' - PARTITION us_west OF INDEX movr.public.vehicles@primary | ALTER PARTITION us_west OF INDEX movr.public.vehicles@primary CONFIGURE ZONE USING - | constraints = '[+region=us-west1]' - PARTITION us_west OF INDEX movr.public.vehicles@vehicles_auto_index_fk_city_ref_users | ALTER PARTITION us_west OF INDEX movr.public.vehicles@vehicles_auto_index_fk_city_ref_users CONFIGURE ZONE USING - | constraints = '[+region=us-west1]' - PARTITION us_east OF INDEX movr.public.vehicles@primary | ALTER PARTITION us_east OF INDEX movr.public.vehicles@primary CONFIGURE ZONE USING - | constraints = '[+region=us-east1]' - PARTITION us_east OF INDEX movr.public.vehicles@vehicles_auto_index_fk_city_ref_users | ALTER PARTITION us_east OF INDEX movr.public.vehicles@vehicles_auto_index_fk_city_ref_users CONFIGURE 
ZONE USING - | constraints = '[+region=us-east1]' - PARTITION europe_west OF INDEX movr.public.vehicles@primary | ALTER PARTITION europe_west OF INDEX movr.public.vehicles@primary CONFIGURE ZONE USING - | constraints = '[+region=europe-west1]' - PARTITION europe_west OF INDEX movr.public.vehicles@vehicles_auto_index_fk_city_ref_users | ALTER PARTITION europe_west OF INDEX movr.public.vehicles@vehicles_auto_index_fk_city_ref_users CONFIGURE ZONE USING - | constraints = '[+region=europe-west1]' - PARTITION us_west OF INDEX movr.public.rides@primary | ALTER PARTITION us_west OF INDEX movr.public.rides@primary CONFIGURE ZONE USING - | constraints = '[+region=us-west1]' - PARTITION us_west OF INDEX movr.public.rides@rides_auto_index_fk_city_ref_users | ALTER PARTITION us_west OF INDEX movr.public.rides@rides_auto_index_fk_city_ref_users CONFIGURE ZONE USING - | constraints = '[+region=us-west1]' - PARTITION us_west OF INDEX movr.public.rides@rides_auto_index_fk_vehicle_city_ref_vehicles | ALTER PARTITION us_west OF INDEX movr.public.rides@rides_auto_index_fk_vehicle_city_ref_vehicles CONFIGURE ZONE USING - | constraints = '[+region=us-west1]' - PARTITION us_east OF INDEX movr.public.rides@primary | ALTER PARTITION us_east OF INDEX movr.public.rides@primary CONFIGURE ZONE USING - | constraints = '[+region=us-east1]' - PARTITION us_east OF INDEX movr.public.rides@rides_auto_index_fk_city_ref_users | ALTER PARTITION us_east OF INDEX movr.public.rides@rides_auto_index_fk_city_ref_users CONFIGURE ZONE USING - | constraints = '[+region=us-east1]' - PARTITION us_east OF INDEX movr.public.rides@rides_auto_index_fk_vehicle_city_ref_vehicles | ALTER PARTITION us_east OF INDEX movr.public.rides@rides_auto_index_fk_vehicle_city_ref_vehicles CONFIGURE ZONE USING - | constraints = '[+region=us-east1]' - PARTITION europe_west OF INDEX movr.public.rides@primary | ALTER PARTITION europe_west OF INDEX movr.public.rides@primary CONFIGURE ZONE USING - | constraints = '[+region=europe-west1]' - PARTITION europe_west OF INDEX movr.public.rides@rides_auto_index_fk_city_ref_users | ALTER PARTITION europe_west OF INDEX movr.public.rides@rides_auto_index_fk_city_ref_users CONFIGURE ZONE USING - | constraints = '[+region=europe-west1]' - PARTITION europe_west OF INDEX movr.public.rides@rides_auto_index_fk_vehicle_city_ref_vehicles | ALTER PARTITION europe_west OF INDEX movr.public.rides@rides_auto_index_fk_vehicle_city_ref_vehicles CONFIGURE ZONE USING - | constraints = '[+region=europe-west1]' - PARTITION us_west OF INDEX movr.public.vehicle_location_histories@primary | ALTER PARTITION us_west OF INDEX movr.public.vehicle_location_histories@primary CONFIGURE ZONE USING - | constraints = '[+region=us-west1]' - PARTITION us_east OF INDEX movr.public.vehicle_location_histories@primary | ALTER PARTITION us_east OF INDEX movr.public.vehicle_location_histories@primary CONFIGURE ZONE USING - | constraints = '[+region=us-east1]' - PARTITION europe_west OF INDEX movr.public.vehicle_location_histories@primary | ALTER PARTITION europe_west OF INDEX movr.public.vehicle_location_histories@primary CONFIGURE ZONE USING - | constraints = '[+region=europe-west1]' - TABLE movr.public.promo_codes | ALTER TABLE movr.public.promo_codes CONFIGURE ZONE USING - | num_replicas = 3, - | constraints = '{+region=us-east1: 1}', - | lease_preferences = '[[+region=us-east1]]' - INDEX movr.public.promo_codes@promo_codes_idx_us_west | ALTER INDEX movr.public.promo_codes@promo_codes_idx_us_west CONFIGURE ZONE USING - | num_replicas = 3, - | constraints 
= '{+region=us-west1: 1}', - | lease_preferences = '[[+region=us-west1]]' - INDEX movr.public.promo_codes@promo_codes_idx_europe_west | ALTER INDEX movr.public.promo_codes@promo_codes_idx_europe_west CONFIGURE ZONE USING - | num_replicas = 3, - | constraints = '{+region=europe-west1: 1}', - | lease_preferences = '[[+region=europe-west1]]' - PARTITION us_west OF INDEX movr.public.user_promo_codes@primary | ALTER PARTITION us_west OF INDEX movr.public.user_promo_codes@primary CONFIGURE ZONE USING - | constraints = '[+region=us-west1]' - PARTITION us_east OF INDEX movr.public.user_promo_codes@primary | ALTER PARTITION us_east OF INDEX movr.public.user_promo_codes@primary CONFIGURE ZONE USING - | constraints = '[+region=us-east1]' - PARTITION europe_west OF INDEX movr.public.user_promo_codes@primary | ALTER PARTITION europe_west OF INDEX movr.public.user_promo_codes@primary CONFIGURE ZONE USING - | constraints = '[+region=europe-west1]' -(34 rows) -~~~ diff --git a/src/current/_includes/v22.1/zone-configs/view-the-default-replication-zone.md b/src/current/_includes/v22.1/zone-configs/view-the-default-replication-zone.md deleted file mode 100644 index cfc8d7b4d80..00000000000 --- a/src/current/_includes/v22.1/zone-configs/view-the-default-replication-zone.md +++ /dev/null @@ -1,17 +0,0 @@ -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW ZONE CONFIGURATION FROM RANGE default; -~~~ - -~~~ - target | raw_config_sql -----------------+------------------------------------------- - RANGE default | ALTER RANGE default CONFIGURE ZONE USING - | range_min_bytes = 134217728, - | range_max_bytes = 536870912, - | gc.ttlseconds = 90000, - | num_replicas = 3, - | constraints = '[]', - | lease_preferences = '[]' -(1 row) -~~~ diff --git a/src/current/_includes/v22.2/orchestration/kubernetes-upgrade-cluster-helm.md b/src/current/_includes/v22.2/orchestration/kubernetes-upgrade-cluster-helm.md index 88b4021014e..03aa3eecf40 100644 --- a/src/current/_includes/v22.2/orchestration/kubernetes-upgrade-cluster-helm.md +++ b/src/current/_includes/v22.2/orchestration/kubernetes-upgrade-cluster-helm.md @@ -6,7 +6,7 @@ Therefore, in order to upgrade to {{ page.version.version }}, you must be on a production release of {{ previous_version }}. - 1. If you are upgrading to {{ page.version.version }} from a production release earlier than {{ previous_version }}, or from a testing release (alpha/beta), first [upgrade to a production release of {{ previous_version }}]({% link {{ previous_version }}/upgrade-cockroachdb-kubernetes.md %}?filters=helm). Be sure to complete all the steps. + 1. If you are upgrading to {{ page.version.version }} from a production release earlier than {{ previous_version }}, or from a testing release (alpha/beta), first upgrade to a production release of {{ previous_version }}. Be sure to complete all the steps. 1. Then return to this page and perform a second upgrade to {{ page.version.version }}. diff --git a/src/current/_includes/v22.2/orchestration/kubernetes-upgrade-cluster-manual.md b/src/current/_includes/v22.2/orchestration/kubernetes-upgrade-cluster-manual.md index 87fffdaf4d0..039371e4541 100644 --- a/src/current/_includes/v22.2/orchestration/kubernetes-upgrade-cluster-manual.md +++ b/src/current/_includes/v22.2/orchestration/kubernetes-upgrade-cluster-manual.md @@ -6,7 +6,7 @@ Therefore, in order to upgrade to {{ page.version.version }}, you must be on a production release of {{ previous_version }}. - 1. 
If you are upgrading to {{ page.version.version }} from a production release earlier than {{ previous_version }}, or from a testing release (alpha/beta), first [upgrade to a production release of {{ previous_version }}]({% link {{ previous_version }}/upgrade-cockroachdb-kubernetes.md %}?filters=manual). Be sure to complete all the steps. + 1. If you are upgrading to {{ page.version.version }} from a production release earlier than {{ previous_version }}, or from a testing release (alpha/beta), first upgrade to a production release of {{ previous_version }}. Be sure to complete all the steps. 1. Then return to this page and perform a second upgrade to {{ page.version.version }}. diff --git a/src/current/_includes/v23.1/orchestration/kubernetes-upgrade-cluster-helm.md b/src/current/_includes/v23.1/orchestration/kubernetes-upgrade-cluster-helm.md index 4eb07232e2f..a681c089d7a 100644 --- a/src/current/_includes/v23.1/orchestration/kubernetes-upgrade-cluster-helm.md +++ b/src/current/_includes/v23.1/orchestration/kubernetes-upgrade-cluster-helm.md @@ -6,7 +6,7 @@ Therefore, in order to upgrade to {{ page.version.version }}, you must be on a production release of {{ previous_version }}. - 1. If you are upgrading to {{ page.version.version }} from a production release earlier than {{ previous_version }}, or from a testing release (alpha/beta), first [upgrade to a production release of {{ previous_version }}]({% link {{ previous_version }}/upgrade-cockroachdb-kubernetes.md %}?filters=helm). Be sure to complete all the steps. + 1. If you are upgrading to {{ page.version.version }} from a production release earlier than {{ previous_version }}, or from a testing release (alpha/beta), first upgrade to a production release of {{ previous_version }}. Be sure to complete all the steps. 1. Then return to this page and perform a second upgrade to {{ page.version.version }}. diff --git a/src/current/_includes/v23.1/orchestration/kubernetes-upgrade-cluster-manual.md b/src/current/_includes/v23.1/orchestration/kubernetes-upgrade-cluster-manual.md index 0705a452274..b63a6a9a863 100644 --- a/src/current/_includes/v23.1/orchestration/kubernetes-upgrade-cluster-manual.md +++ b/src/current/_includes/v23.1/orchestration/kubernetes-upgrade-cluster-manual.md @@ -6,7 +6,7 @@ Therefore, in order to upgrade to {{ page.version.version }}, you must be on a production release of {{ previous_version }}. - 1. If you are upgrading to {{ page.version.version }} from a production release earlier than {{ previous_version }}, or from a testing release (alpha/beta), first [upgrade to a production release of {{ previous_version }}]({% link {{ previous_version }}/upgrade-cockroachdb-kubernetes.md %}?filters=manual). Be sure to complete all the steps. + 1. If you are upgrading to {{ page.version.version }} from a production release earlier than {{ previous_version }}, or from a testing release (alpha/beta), first upgrade to a production release of {{ previous_version }}. Be sure to complete all the steps. 1. Then return to this page and perform a second upgrade to {{ page.version.version }}. 
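Before and after each upgrade step, it can be useful to confirm which CockroachDB image the pods are actually running. A minimal sketch using `kubectl`; the `app=cockroachdb` label selector is an assumption and depends on how the cluster was deployed:

~~~ shell
# Print each CockroachDB pod and the image it is currently running.
kubectl get pods -l app=cockroachdb -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'
~~~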
diff --git a/src/current/_plugins/sidebar_htmltest.rb b/src/current/_plugins/sidebar_htmltest.rb index 970926334d0..62017bd7728 100644 --- a/src/current/_plugins/sidebar_htmltest.rb +++ b/src/current/_plugins/sidebar_htmltest.rb @@ -1,16 +1,36 @@ require 'json' require 'liquid' +require 'yaml' module SidebarHTMLTest class Generator < Jekyll::Generator def generate(site) @site = site + # Read htmltest configuration to get ignored directories + htmltest_config = YAML.load_file('.htmltest.yml') rescue {} + ignored_dirs = htmltest_config['IgnoreDirs'] || [] + + # Extract version numbers from ignored directories + ignored_versions = ignored_dirs.map do |dir| + match = dir.match(/\^?docs\/?(v\d+\.\d+)/) + match[1] if match + end.compact + Dir[File.join(site.config['includes_dir'], 'sidebar-data-v*.json')].each do |f| next unless !!site.config['cockroachcloud'] == f.include?('cockroachcloud') + + # Extract version from filename + version = f.match(/sidebar-data-(v\d+\.\d+)/)[1] + + # Skip if this version is in the ignored list + if ignored_versions.include?(version) + Jekyll.logger.info "SidebarHTMLTest:", "Skipping ignored version #{version}" + next + end + partial = site.liquid_renderer.file(f).parse(File.read(f)) json = partial.render!(site.site_payload, {registers: {site: site}}) - version = f.match(/sidebar-data-(v\d+\.\d+)/)[1] render_sidebar(json, version) end end diff --git a/src/current/images/v22.1/ui_batches.png b/src/current/images/common/ui/ui_batches.png similarity index 100% rename from src/current/images/v22.1/ui_batches.png rename to src/current/images/common/ui/ui_batches.png diff --git a/src/current/images/v22.1/ui_consistencychecker_queue.png b/src/current/images/common/ui/ui_consistencychecker_queue.png similarity index 100% rename from src/current/images/v22.1/ui_consistencychecker_queue.png rename to src/current/images/common/ui/ui_consistencychecker_queue.png diff --git a/src/current/images/v22.1/ui_gc_queue.png b/src/current/images/common/ui/ui_gc_queue.png similarity index 100% rename from src/current/images/v22.1/ui_gc_queue.png rename to src/current/images/common/ui/ui_gc_queue.png diff --git a/src/current/images/v22.1/ui_kv_transactions.png b/src/current/images/common/ui/ui_kv_transactions.png similarity index 100% rename from src/current/images/v22.1/ui_kv_transactions.png rename to src/current/images/common/ui/ui_kv_transactions.png diff --git a/src/current/images/v22.1/ui_kv_transactions_90.png b/src/current/images/common/ui/ui_kv_transactions_90.png similarity index 100% rename from src/current/images/v22.1/ui_kv_transactions_90.png rename to src/current/images/common/ui/ui_kv_transactions_90.png diff --git a/src/current/images/v22.1/ui_kv_transactions_99.png b/src/current/images/common/ui/ui_kv_transactions_99.png similarity index 100% rename from src/current/images/v22.1/ui_kv_transactions_99.png rename to src/current/images/common/ui/ui_kv_transactions_99.png diff --git a/src/current/images/v22.1/ui_merge_queue.png b/src/current/images/common/ui/ui_merge_queue.png similarity index 100% rename from src/current/images/v22.1/ui_merge_queue.png rename to src/current/images/common/ui/ui_merge_queue.png diff --git a/src/current/images/v22.1/ui_node_heartbeat_90.png b/src/current/images/common/ui/ui_node_heartbeat_90.png similarity index 100% rename from src/current/images/v22.1/ui_node_heartbeat_90.png rename to src/current/images/common/ui/ui_node_heartbeat_90.png diff --git a/src/current/images/v22.1/ui_node_heartbeat_99.png 
b/src/current/images/common/ui/ui_node_heartbeat_99.png similarity index 100% rename from src/current/images/v22.1/ui_node_heartbeat_99.png rename to src/current/images/common/ui/ui_node_heartbeat_99.png diff --git a/src/current/images/v22.1/ui_queue_failures.png b/src/current/images/common/ui/ui_queue_failures.png similarity index 100% rename from src/current/images/v22.1/ui_queue_failures.png rename to src/current/images/common/ui/ui_queue_failures.png diff --git a/src/current/images/v22.1/ui_queue_time.png b/src/current/images/common/ui/ui_queue_time.png similarity index 100% rename from src/current/images/v22.1/ui_queue_time.png rename to src/current/images/common/ui/ui_queue_time.png diff --git a/src/current/images/v22.1/ui_raftlog_queue.png b/src/current/images/common/ui/ui_raftlog_queue.png similarity index 100% rename from src/current/images/v22.1/ui_raftlog_queue.png rename to src/current/images/common/ui/ui_raftlog_queue.png diff --git a/src/current/images/v22.1/ui_raftsnapshot_queue.png b/src/current/images/common/ui/ui_raftsnapshot_queue.png similarity index 100% rename from src/current/images/v22.1/ui_raftsnapshot_queue.png rename to src/current/images/common/ui/ui_raftsnapshot_queue.png diff --git a/src/current/images/v22.1/ui_replicagc_queue.png b/src/current/images/common/ui/ui_replicagc_queue.png similarity index 100% rename from src/current/images/v22.1/ui_replicagc_queue.png rename to src/current/images/common/ui/ui_replicagc_queue.png diff --git a/src/current/images/v22.1/ui_replication_queue.png b/src/current/images/common/ui/ui_replication_queue.png similarity index 100% rename from src/current/images/v22.1/ui_replication_queue.png rename to src/current/images/common/ui/ui_replication_queue.png diff --git a/src/current/images/v22.1/ui_rpc_errors.png b/src/current/images/common/ui/ui_rpc_errors.png similarity index 100% rename from src/current/images/v22.1/ui_rpc_errors.png rename to src/current/images/common/ui/ui_rpc_errors.png diff --git a/src/current/images/v22.1/ui_rpcs.png b/src/current/images/common/ui/ui_rpcs.png similarity index 100% rename from src/current/images/v22.1/ui_rpcs.png rename to src/current/images/common/ui/ui_rpcs.png diff --git a/src/current/images/v22.1/ui_slow_distsender.png b/src/current/images/common/ui/ui_slow_distsender.png similarity index 100% rename from src/current/images/v22.1/ui_slow_distsender.png rename to src/current/images/common/ui/ui_slow_distsender.png diff --git a/src/current/images/v22.1/ui_slow_latch.png b/src/current/images/common/ui/ui_slow_latch.png similarity index 100% rename from src/current/images/v22.1/ui_slow_latch.png rename to src/current/images/common/ui/ui_slow_latch.png diff --git a/src/current/images/v22.1/ui_slow_lease.png b/src/current/images/common/ui/ui_slow_lease.png similarity index 100% rename from src/current/images/v22.1/ui_slow_lease.png rename to src/current/images/common/ui/ui_slow_lease.png diff --git a/src/current/images/v22.1/ui_slow_raft.png b/src/current/images/common/ui/ui_slow_raft.png similarity index 100% rename from src/current/images/v22.1/ui_slow_raft.png rename to src/current/images/common/ui/ui_slow_raft.png diff --git a/src/current/images/v22.1/ui_split_queue.png b/src/current/images/common/ui/ui_split_queue.png similarity index 100% rename from src/current/images/v22.1/ui_split_queue.png rename to src/current/images/common/ui/ui_split_queue.png diff --git a/src/current/images/v22.1/ui_tsmaintenance_queue.png b/src/current/images/common/ui/ui_tsmaintenance_queue.png similarity index 
100% rename from src/current/images/v22.1/ui_tsmaintenance_queue.png rename to src/current/images/common/ui/ui_tsmaintenance_queue.png diff --git a/src/current/images/v22.1/CockroachDB_Training_Wide.png b/src/current/images/v22.1/CockroachDB_Training_Wide.png deleted file mode 100644 index 0844c2b50e0..00000000000 Binary files a/src/current/images/v22.1/CockroachDB_Training_Wide.png and /dev/null differ diff --git a/src/current/images/v22.1/Parallel_Statement_Execution_Error_Mismatch.png b/src/current/images/v22.1/Parallel_Statement_Execution_Error_Mismatch.png deleted file mode 100644 index f60360c9598..00000000000 Binary files a/src/current/images/v22.1/Parallel_Statement_Execution_Error_Mismatch.png and /dev/null differ diff --git a/src/current/images/v22.1/Parallel_Statement_Hybrid_Execution.png b/src/current/images/v22.1/Parallel_Statement_Hybrid_Execution.png deleted file mode 100644 index a4edf85dc02..00000000000 Binary files a/src/current/images/v22.1/Parallel_Statement_Hybrid_Execution.png and /dev/null differ diff --git a/src/current/images/v22.1/Parallel_Statement_Normal_Execution.png b/src/current/images/v22.1/Parallel_Statement_Normal_Execution.png deleted file mode 100644 index df63ab1da01..00000000000 Binary files a/src/current/images/v22.1/Parallel_Statement_Normal_Execution.png and /dev/null differ diff --git a/src/current/images/v22.1/Sequential_Statement_Execution.png b/src/current/images/v22.1/Sequential_Statement_Execution.png deleted file mode 100644 index 99c47c51664..00000000000 Binary files a/src/current/images/v22.1/Sequential_Statement_Execution.png and /dev/null differ diff --git a/src/current/images/v22.1/after-decommission1.png b/src/current/images/v22.1/after-decommission1.png deleted file mode 100644 index bfd8bf0fdc9..00000000000 Binary files a/src/current/images/v22.1/after-decommission1.png and /dev/null differ diff --git a/src/current/images/v22.1/after-decommission2.png b/src/current/images/v22.1/after-decommission2.png deleted file mode 100644 index 1ef4117b7dd..00000000000 Binary files a/src/current/images/v22.1/after-decommission2.png and /dev/null differ diff --git a/src/current/images/v22.1/automated-operations1.png b/src/current/images/v22.1/automated-operations1.png deleted file mode 100644 index 64c6e51616c..00000000000 Binary files a/src/current/images/v22.1/automated-operations1.png and /dev/null differ diff --git a/src/current/images/v22.1/aws-architecture.png b/src/current/images/v22.1/aws-architecture.png deleted file mode 100644 index bbe4e3be595..00000000000 Binary files a/src/current/images/v22.1/aws-architecture.png and /dev/null differ diff --git a/src/current/images/v22.1/aws-dms-batchapplyenabled.png b/src/current/images/v22.1/aws-dms-batchapplyenabled.png deleted file mode 100644 index 94311a24039..00000000000 Binary files a/src/current/images/v22.1/aws-dms-batchapplyenabled.png and /dev/null differ diff --git a/src/current/images/v22.1/aws-dms-cloudwatch-logs.png b/src/current/images/v22.1/aws-dms-cloudwatch-logs.png deleted file mode 100644 index 9276b3adc54..00000000000 Binary files a/src/current/images/v22.1/aws-dms-cloudwatch-logs.png and /dev/null differ diff --git a/src/current/images/v22.1/aws-dms-create-db-migration-task.png b/src/current/images/v22.1/aws-dms-create-db-migration-task.png deleted file mode 100644 index 4c35fc784b2..00000000000 Binary files a/src/current/images/v22.1/aws-dms-create-db-migration-task.png and /dev/null differ diff --git a/src/current/images/v22.1/aws-dms-create-endpoint.png 
b/src/current/images/v22.1/aws-dms-create-endpoint.png deleted file mode 100644 index 4080b73b1a5..00000000000 Binary files a/src/current/images/v22.1/aws-dms-create-endpoint.png and /dev/null differ diff --git a/src/current/images/v22.1/aws-dms-endpoint-configuration.png b/src/current/images/v22.1/aws-dms-endpoint-configuration.png deleted file mode 100644 index 9170d9a517b..00000000000 Binary files a/src/current/images/v22.1/aws-dms-endpoint-configuration.png and /dev/null differ diff --git a/src/current/images/v22.1/aws-dms-reload-table-data.png b/src/current/images/v22.1/aws-dms-reload-table-data.png deleted file mode 100644 index d4760287783..00000000000 Binary files a/src/current/images/v22.1/aws-dms-reload-table-data.png and /dev/null differ diff --git a/src/current/images/v22.1/aws-dms-table-mappings.png b/src/current/images/v22.1/aws-dms-table-mappings.png deleted file mode 100644 index c302ae83823..00000000000 Binary files a/src/current/images/v22.1/aws-dms-table-mappings.png and /dev/null differ diff --git a/src/current/images/v22.1/aws-dms-task-configuration.png b/src/current/images/v22.1/aws-dms-task-configuration.png deleted file mode 100644 index 19d5fac9486..00000000000 Binary files a/src/current/images/v22.1/aws-dms-task-configuration.png and /dev/null differ diff --git a/src/current/images/v22.1/aws-dms-task-settings.png b/src/current/images/v22.1/aws-dms-task-settings.png deleted file mode 100644 index db0a1f5340e..00000000000 Binary files a/src/current/images/v22.1/aws-dms-task-settings.png and /dev/null differ diff --git a/src/current/images/v22.1/aws-dms-test-endpoint.png b/src/current/images/v22.1/aws-dms-test-endpoint.png deleted file mode 100644 index 8f94d6e0da8..00000000000 Binary files a/src/current/images/v22.1/aws-dms-test-endpoint.png and /dev/null differ diff --git a/src/current/images/v22.1/backup-overview.png b/src/current/images/v22.1/backup-overview.png deleted file mode 100644 index 4bba5638f9e..00000000000 Binary files a/src/current/images/v22.1/backup-overview.png and /dev/null differ diff --git a/src/current/images/v22.1/backup-processing.png b/src/current/images/v22.1/backup-processing.png deleted file mode 100644 index 84da7081065..00000000000 Binary files a/src/current/images/v22.1/backup-processing.png and /dev/null differ diff --git a/src/current/images/v22.1/before-decommission0.png b/src/current/images/v22.1/before-decommission0.png deleted file mode 100644 index 15b8ae207e4..00000000000 Binary files a/src/current/images/v22.1/before-decommission0.png and /dev/null differ diff --git a/src/current/images/v22.1/before-decommission1.png b/src/current/images/v22.1/before-decommission1.png deleted file mode 100644 index e189b01857a..00000000000 Binary files a/src/current/images/v22.1/before-decommission1.png and /dev/null differ diff --git a/src/current/images/v22.1/before-decommission2.png b/src/current/images/v22.1/before-decommission2.png deleted file mode 100644 index 1a2844eb45a..00000000000 Binary files a/src/current/images/v22.1/before-decommission2.png and /dev/null differ diff --git a/src/current/images/v22.1/certs_requests.png b/src/current/images/v22.1/certs_requests.png deleted file mode 100644 index cdddc80f7a7..00000000000 Binary files a/src/current/images/v22.1/certs_requests.png and /dev/null differ diff --git a/src/current/images/v22.1/certs_signing.png b/src/current/images/v22.1/certs_signing.png deleted file mode 100644 index e758b0004bd..00000000000 Binary files a/src/current/images/v22.1/certs_signing.png and /dev/null differ 
diff --git a/src/current/images/v22.1/changefeed-pubsub-output.png b/src/current/images/v22.1/changefeed-pubsub-output.png deleted file mode 100644 index b18f4f712d3..00000000000 Binary files a/src/current/images/v22.1/changefeed-pubsub-output.png and /dev/null differ diff --git a/src/current/images/v22.1/changefeed-structure.png b/src/current/images/v22.1/changefeed-structure.png deleted file mode 100644 index 81e4e689c88..00000000000 Binary files a/src/current/images/v22.1/changefeed-structure.png and /dev/null differ diff --git a/src/current/images/v22.1/cloudformation_admin_ui_live_node_count.png b/src/current/images/v22.1/cloudformation_admin_ui_live_node_count.png deleted file mode 100644 index 8e0016f5180..00000000000 Binary files a/src/current/images/v22.1/cloudformation_admin_ui_live_node_count.png and /dev/null differ diff --git a/src/current/images/v22.1/cloudformation_admin_ui_replicas.png b/src/current/images/v22.1/cloudformation_admin_ui_replicas.png deleted file mode 100644 index 9327b1004e4..00000000000 Binary files a/src/current/images/v22.1/cloudformation_admin_ui_replicas.png and /dev/null differ diff --git a/src/current/images/v22.1/cloudformation_admin_ui_sql_queries.png b/src/current/images/v22.1/cloudformation_admin_ui_sql_queries.png deleted file mode 100644 index 843d94b30f0..00000000000 Binary files a/src/current/images/v22.1/cloudformation_admin_ui_sql_queries.png and /dev/null differ diff --git a/src/current/images/v22.1/cluster-status-after-decommission1.png b/src/current/images/v22.1/cluster-status-after-decommission1.png deleted file mode 100644 index a3c559e2940..00000000000 Binary files a/src/current/images/v22.1/cluster-status-after-decommission1.png and /dev/null differ diff --git a/src/current/images/v22.1/cluster-status-after-decommission2.png b/src/current/images/v22.1/cluster-status-after-decommission2.png deleted file mode 100644 index fa37161ad0c..00000000000 Binary files a/src/current/images/v22.1/cluster-status-after-decommission2.png and /dev/null differ diff --git a/src/current/images/v22.1/cockroachdb-operator-delete-openshift.png b/src/current/images/v22.1/cockroachdb-operator-delete-openshift.png deleted file mode 100644 index ed40df94696..00000000000 Binary files a/src/current/images/v22.1/cockroachdb-operator-delete-openshift.png and /dev/null differ diff --git a/src/current/images/v22.1/cockroachdb-operator-instance-openshift.png b/src/current/images/v22.1/cockroachdb-operator-instance-openshift.png deleted file mode 100644 index 4d50f4862e1..00000000000 Binary files a/src/current/images/v22.1/cockroachdb-operator-instance-openshift.png and /dev/null differ diff --git a/src/current/images/v22.1/cockroachdb-operator-logs-openshift.png b/src/current/images/v22.1/cockroachdb-operator-logs-openshift.png deleted file mode 100644 index 1276e947480..00000000000 Binary files a/src/current/images/v22.1/cockroachdb-operator-logs-openshift.png and /dev/null differ diff --git a/src/current/images/v22.1/cockroachdb-operator-openshift.png b/src/current/images/v22.1/cockroachdb-operator-openshift.png deleted file mode 100644 index 620d919695e..00000000000 Binary files a/src/current/images/v22.1/cockroachdb-operator-openshift.png and /dev/null differ diff --git a/src/current/images/v22.1/cockroachdb-operator-pods-openshift.png b/src/current/images/v22.1/cockroachdb-operator-pods-openshift.png deleted file mode 100644 index d06fe236fc7..00000000000 Binary files a/src/current/images/v22.1/cockroachdb-operator-pods-openshift.png and /dev/null differ diff 
--git a/src/current/images/v22.1/concurrency.png b/src/current/images/v22.1/concurrency.png deleted file mode 100644 index e69de29bb2d..00000000000 diff --git a/src/current/images/v22.1/confluent-messages-screenshot.png b/src/current/images/v22.1/confluent-messages-screenshot.png deleted file mode 100644 index 952acd9bcfb..00000000000 Binary files a/src/current/images/v22.1/confluent-messages-screenshot.png and /dev/null differ diff --git a/src/current/images/v22.1/confluent-schemas-screenshot.png b/src/current/images/v22.1/confluent-schemas-screenshot.png deleted file mode 100644 index cb5143add40..00000000000 Binary files a/src/current/images/v22.1/confluent-schemas-screenshot.png and /dev/null differ diff --git a/src/current/images/v22.1/datadog-crdb-dashboard-list.png b/src/current/images/v22.1/datadog-crdb-dashboard-list.png deleted file mode 100644 index 00513dca6aa..00000000000 Binary files a/src/current/images/v22.1/datadog-crdb-dashboard-list.png and /dev/null differ diff --git a/src/current/images/v22.1/datadog-crdb-integration.png b/src/current/images/v22.1/datadog-crdb-integration.png deleted file mode 100644 index 7a14b561474..00000000000 Binary files a/src/current/images/v22.1/datadog-crdb-integration.png and /dev/null differ diff --git a/src/current/images/v22.1/datadog-crdb-overview-dashboard.png b/src/current/images/v22.1/datadog-crdb-overview-dashboard.png deleted file mode 100644 index f81a14a3d46..00000000000 Binary files a/src/current/images/v22.1/datadog-crdb-overview-dashboard.png and /dev/null differ diff --git a/src/current/images/v22.1/datadog-crdb-storage-alert.png b/src/current/images/v22.1/datadog-crdb-storage-alert.png deleted file mode 100644 index 8aedeb0a10c..00000000000 Binary files a/src/current/images/v22.1/datadog-crdb-storage-alert.png and /dev/null differ diff --git a/src/current/images/v22.1/datadog-crdb-threshold-alert.png b/src/current/images/v22.1/datadog-crdb-threshold-alert.png deleted file mode 100644 index 1461d4fcbf9..00000000000 Binary files a/src/current/images/v22.1/datadog-crdb-threshold-alert.png and /dev/null differ diff --git a/src/current/images/v22.1/datadog-crdb-workload-dashboard.png b/src/current/images/v22.1/datadog-crdb-workload-dashboard.png deleted file mode 100644 index 1999d14e4b9..00000000000 Binary files a/src/current/images/v22.1/datadog-crdb-workload-dashboard.png and /dev/null differ diff --git a/src/current/images/v22.1/dbeaver-01-select-cockroachdb.png b/src/current/images/v22.1/dbeaver-01-select-cockroachdb.png deleted file mode 100644 index b225e4c73d6..00000000000 Binary files a/src/current/images/v22.1/dbeaver-01-select-cockroachdb.png and /dev/null differ diff --git a/src/current/images/v22.1/dbeaver-02-cockroachdb-connection-settings.png b/src/current/images/v22.1/dbeaver-02-cockroachdb-connection-settings.png deleted file mode 100644 index c3985d14d83..00000000000 Binary files a/src/current/images/v22.1/dbeaver-02-cockroachdb-connection-settings.png and /dev/null differ diff --git a/src/current/images/v22.1/dbeaver-03-ssl-tab.png b/src/current/images/v22.1/dbeaver-03-ssl-tab.png deleted file mode 100644 index b04c1bcf01d..00000000000 Binary files a/src/current/images/v22.1/dbeaver-03-ssl-tab.png and /dev/null differ diff --git a/src/current/images/v22.1/dbeaver-04-connection-success-dialog.png b/src/current/images/v22.1/dbeaver-04-connection-success-dialog.png deleted file mode 100644 index a4143f0273d..00000000000 Binary files a/src/current/images/v22.1/dbeaver-04-connection-success-dialog.png and /dev/null 
differ diff --git a/src/current/images/v22.1/dbeaver-05-movr.png b/src/current/images/v22.1/dbeaver-05-movr.png deleted file mode 100644 index 84ad0e1c5a7..00000000000 Binary files a/src/current/images/v22.1/dbeaver-05-movr.png and /dev/null differ diff --git a/src/current/images/v22.1/dbeaver-06-download-driver.png b/src/current/images/v22.1/dbeaver-06-download-driver.png deleted file mode 100644 index 73f087e423a..00000000000 Binary files a/src/current/images/v22.1/dbeaver-06-download-driver.png and /dev/null differ diff --git a/src/current/images/v22.1/dbmarlin-crdb-dashboard.png b/src/current/images/v22.1/dbmarlin-crdb-dashboard.png deleted file mode 100644 index edc0a74ab1d..00000000000 Binary files a/src/current/images/v22.1/dbmarlin-crdb-dashboard.png and /dev/null differ diff --git a/src/current/images/v22.1/decommission-multiple1.png b/src/current/images/v22.1/decommission-multiple1.png deleted file mode 100644 index bef4ead69b3..00000000000 Binary files a/src/current/images/v22.1/decommission-multiple1.png and /dev/null differ diff --git a/src/current/images/v22.1/decommission-multiple2.png b/src/current/images/v22.1/decommission-multiple2.png deleted file mode 100644 index 9fb76750ab7..00000000000 Binary files a/src/current/images/v22.1/decommission-multiple2.png and /dev/null differ diff --git a/src/current/images/v22.1/decommission-multiple3.png b/src/current/images/v22.1/decommission-multiple3.png deleted file mode 100644 index b64191ff416..00000000000 Binary files a/src/current/images/v22.1/decommission-multiple3.png and /dev/null differ diff --git a/src/current/images/v22.1/decommission-multiple4.png b/src/current/images/v22.1/decommission-multiple4.png deleted file mode 100644 index eae15c3b778..00000000000 Binary files a/src/current/images/v22.1/decommission-multiple4.png and /dev/null differ diff --git a/src/current/images/v22.1/decommission-multiple5.png b/src/current/images/v22.1/decommission-multiple5.png deleted file mode 100644 index c231355d85a..00000000000 Binary files a/src/current/images/v22.1/decommission-multiple5.png and /dev/null differ diff --git a/src/current/images/v22.1/decommission-multiple6.png b/src/current/images/v22.1/decommission-multiple6.png deleted file mode 100644 index dfef9badc17..00000000000 Binary files a/src/current/images/v22.1/decommission-multiple6.png and /dev/null differ diff --git a/src/current/images/v22.1/decommission-multiple7.png b/src/current/images/v22.1/decommission-multiple7.png deleted file mode 100644 index 8378508c4ae..00000000000 Binary files a/src/current/images/v22.1/decommission-multiple7.png and /dev/null differ diff --git a/src/current/images/v22.1/decommission-scenario1.1.png b/src/current/images/v22.1/decommission-scenario1.1.png deleted file mode 100644 index a66389270de..00000000000 Binary files a/src/current/images/v22.1/decommission-scenario1.1.png and /dev/null differ diff --git a/src/current/images/v22.1/decommission-scenario1.2.png b/src/current/images/v22.1/decommission-scenario1.2.png deleted file mode 100644 index 9b33855e101..00000000000 Binary files a/src/current/images/v22.1/decommission-scenario1.2.png and /dev/null differ diff --git a/src/current/images/v22.1/decommission-scenario1.3.png b/src/current/images/v22.1/decommission-scenario1.3.png deleted file mode 100644 index 4c1175d956b..00000000000 Binary files a/src/current/images/v22.1/decommission-scenario1.3.png and /dev/null differ diff --git a/src/current/images/v22.1/decommission-scenario2.1.png 
b/src/current/images/v22.1/decommission-scenario2.1.png deleted file mode 100644 index 2fa8790c556..00000000000 Binary files a/src/current/images/v22.1/decommission-scenario2.1.png and /dev/null differ diff --git a/src/current/images/v22.1/decommission-scenario2.2.png b/src/current/images/v22.1/decommission-scenario2.2.png deleted file mode 100644 index 391b8e24c0f..00000000000 Binary files a/src/current/images/v22.1/decommission-scenario2.2.png and /dev/null differ diff --git a/src/current/images/v22.1/decommission-scenario3.1.png b/src/current/images/v22.1/decommission-scenario3.1.png deleted file mode 100644 index db682df3d78..00000000000 Binary files a/src/current/images/v22.1/decommission-scenario3.1.png and /dev/null differ diff --git a/src/current/images/v22.1/decommission-scenario3.2.png b/src/current/images/v22.1/decommission-scenario3.2.png deleted file mode 100644 index 3571bd0b83e..00000000000 Binary files a/src/current/images/v22.1/decommission-scenario3.2.png and /dev/null differ diff --git a/src/current/images/v22.1/decommission-scenario3.3.png b/src/current/images/v22.1/decommission-scenario3.3.png deleted file mode 100644 index 45f61d9bd18..00000000000 Binary files a/src/current/images/v22.1/decommission-scenario3.3.png and /dev/null differ diff --git a/src/current/images/v22.1/explain-distsql-plan.png b/src/current/images/v22.1/explain-distsql-plan.png deleted file mode 100644 index 39f782384db..00000000000 Binary files a/src/current/images/v22.1/explain-distsql-plan.png and /dev/null differ diff --git a/src/current/images/v22.1/explain-distsql-types-plan.png b/src/current/images/v22.1/explain-distsql-types-plan.png deleted file mode 100644 index 7f75deee895..00000000000 Binary files a/src/current/images/v22.1/explain-distsql-types-plan.png and /dev/null differ diff --git a/src/current/images/v22.1/fault-tolerance-1.png b/src/current/images/v22.1/fault-tolerance-1.png deleted file mode 100644 index be0d216321e..00000000000 Binary files a/src/current/images/v22.1/fault-tolerance-1.png and /dev/null differ diff --git a/src/current/images/v22.1/fault-tolerance-2.png b/src/current/images/v22.1/fault-tolerance-2.png deleted file mode 100644 index fe4aca002e2..00000000000 Binary files a/src/current/images/v22.1/fault-tolerance-2.png and /dev/null differ diff --git a/src/current/images/v22.1/fault-tolerance-3.png b/src/current/images/v22.1/fault-tolerance-3.png deleted file mode 100644 index c3e76e18c39..00000000000 Binary files a/src/current/images/v22.1/fault-tolerance-3.png and /dev/null differ diff --git a/src/current/images/v22.1/fault-tolerance-4.png b/src/current/images/v22.1/fault-tolerance-4.png deleted file mode 100644 index 1d30a4ba71b..00000000000 Binary files a/src/current/images/v22.1/fault-tolerance-4.png and /dev/null differ diff --git a/src/current/images/v22.1/fault-tolerance-5.png b/src/current/images/v22.1/fault-tolerance-5.png deleted file mode 100644 index c409ec3d5f8..00000000000 Binary files a/src/current/images/v22.1/fault-tolerance-5.png and /dev/null differ diff --git a/src/current/images/v22.1/fault-tolerance-6.png b/src/current/images/v22.1/fault-tolerance-6.png deleted file mode 100644 index 431e98efe61..00000000000 Binary files a/src/current/images/v22.1/fault-tolerance-6.png and /dev/null differ diff --git a/src/current/images/v22.1/fault-tolerance-7.png b/src/current/images/v22.1/fault-tolerance-7.png deleted file mode 100644 index 5191408e656..00000000000 Binary files a/src/current/images/v22.1/fault-tolerance-7.png and /dev/null differ diff 
--git a/src/current/images/v22.1/fault-tolerance-8.png b/src/current/images/v22.1/fault-tolerance-8.png deleted file mode 100644 index 046890c8ada..00000000000 Binary files a/src/current/images/v22.1/fault-tolerance-8.png and /dev/null differ diff --git a/src/current/images/v22.1/fault-tolerance-9.png b/src/current/images/v22.1/fault-tolerance-9.png deleted file mode 100644 index e8d321e56db..00000000000 Binary files a/src/current/images/v22.1/fault-tolerance-9.png and /dev/null differ diff --git a/src/current/images/v22.1/follow-workload-1.png b/src/current/images/v22.1/follow-workload-1.png deleted file mode 100644 index a58fcb2e5ed..00000000000 Binary files a/src/current/images/v22.1/follow-workload-1.png and /dev/null differ diff --git a/src/current/images/v22.1/follow-workload-2.png b/src/current/images/v22.1/follow-workload-2.png deleted file mode 100644 index 47d83c5d4d6..00000000000 Binary files a/src/current/images/v22.1/follow-workload-2.png and /dev/null differ diff --git a/src/current/images/v22.1/follow-workload-network-latency.png b/src/current/images/v22.1/follow-workload-network-latency.png deleted file mode 100644 index a3669e56660..00000000000 Binary files a/src/current/images/v22.1/follow-workload-network-latency.png and /dev/null differ diff --git a/src/current/images/v22.1/geo-partitioning-cluster-topology.png b/src/current/images/v22.1/geo-partitioning-cluster-topology.png deleted file mode 100644 index ec4ce6d5416..00000000000 Binary files a/src/current/images/v22.1/geo-partitioning-cluster-topology.png and /dev/null differ diff --git a/src/current/images/v22.1/geo-partitioning-network-latency.png b/src/current/images/v22.1/geo-partitioning-network-latency.png deleted file mode 100644 index 6c1d94c6749..00000000000 Binary files a/src/current/images/v22.1/geo-partitioning-network-latency.png and /dev/null differ diff --git a/src/current/images/v22.1/geo-partitioning-node-map-1.png b/src/current/images/v22.1/geo-partitioning-node-map-1.png deleted file mode 100644 index a93d49dc118..00000000000 Binary files a/src/current/images/v22.1/geo-partitioning-node-map-1.png and /dev/null differ diff --git a/src/current/images/v22.1/geo-partitioning-node-map-2.png b/src/current/images/v22.1/geo-partitioning-node-map-2.png deleted file mode 100644 index 9600dcd7b59..00000000000 Binary files a/src/current/images/v22.1/geo-partitioning-node-map-2.png and /dev/null differ diff --git a/src/current/images/v22.1/geo-partitioning-node-map-3.png b/src/current/images/v22.1/geo-partitioning-node-map-3.png deleted file mode 100644 index 579959153d4..00000000000 Binary files a/src/current/images/v22.1/geo-partitioning-node-map-3.png and /dev/null differ diff --git a/src/current/images/v22.1/geo-partitioning-resiliency-1.png b/src/current/images/v22.1/geo-partitioning-resiliency-1.png deleted file mode 100644 index 08c7407af90..00000000000 Binary files a/src/current/images/v22.1/geo-partitioning-resiliency-1.png and /dev/null differ diff --git a/src/current/images/v22.1/geo-partitioning-resiliency-2.png b/src/current/images/v22.1/geo-partitioning-resiliency-2.png deleted file mode 100644 index 4217566ae57..00000000000 Binary files a/src/current/images/v22.1/geo-partitioning-resiliency-2.png and /dev/null differ diff --git a/src/current/images/v22.1/geo-partitioning-sql-latency-after-1.png b/src/current/images/v22.1/geo-partitioning-sql-latency-after-1.png deleted file mode 100644 index c2c2257971f..00000000000 Binary files a/src/current/images/v22.1/geo-partitioning-sql-latency-after-1.png 
and /dev/null differ diff --git a/src/current/images/v22.1/geo-partitioning-sql-latency-after-2.png b/src/current/images/v22.1/geo-partitioning-sql-latency-after-2.png deleted file mode 100644 index 60e43d36f09..00000000000 Binary files a/src/current/images/v22.1/geo-partitioning-sql-latency-after-2.png and /dev/null differ diff --git a/src/current/images/v22.1/geo-partitioning-sql-latency-after-3.png b/src/current/images/v22.1/geo-partitioning-sql-latency-after-3.png deleted file mode 100644 index 7c6b0dfae2d..00000000000 Binary files a/src/current/images/v22.1/geo-partitioning-sql-latency-after-3.png and /dev/null differ diff --git a/src/current/images/v22.1/geo-partitioning-sql-latency-before.png b/src/current/images/v22.1/geo-partitioning-sql-latency-before.png deleted file mode 100644 index e7b63c2a9af..00000000000 Binary files a/src/current/images/v22.1/geo-partitioning-sql-latency-before.png and /dev/null differ diff --git a/src/current/images/v22.1/geospatial/1999-oklahoma-tornado-outbreak-map.png b/src/current/images/v22.1/geospatial/1999-oklahoma-tornado-outbreak-map.png deleted file mode 100644 index 04151e9aad3..00000000000 Binary files a/src/current/images/v22.1/geospatial/1999-oklahoma-tornado-outbreak-map.png and /dev/null differ diff --git a/src/current/images/v22.1/geospatial/geojson_example.png b/src/current/images/v22.1/geospatial/geojson_example.png deleted file mode 100644 index e3686108ef2..00000000000 Binary files a/src/current/images/v22.1/geospatial/geojson_example.png and /dev/null differ diff --git a/src/current/images/v22.1/geospatial/geoserver-us-atlas-00.png b/src/current/images/v22.1/geospatial/geoserver-us-atlas-00.png deleted file mode 100644 index cc641fee45c..00000000000 Binary files a/src/current/images/v22.1/geospatial/geoserver-us-atlas-00.png and /dev/null differ diff --git a/src/current/images/v22.1/geospatial/quadtree.png b/src/current/images/v22.1/geospatial/quadtree.png deleted file mode 100644 index 6a8512295b8..00000000000 Binary files a/src/current/images/v22.1/geospatial/quadtree.png and /dev/null differ diff --git a/src/current/images/v22.1/geospatial/s2-coverings-tiled.png b/src/current/images/v22.1/geospatial/s2-coverings-tiled.png deleted file mode 100644 index b9e9a3386a5..00000000000 Binary files a/src/current/images/v22.1/geospatial/s2-coverings-tiled.png and /dev/null differ diff --git a/src/current/images/v22.1/geospatial/s2-coverings.gif b/src/current/images/v22.1/geospatial/s2-coverings.gif deleted file mode 100644 index 1dbb41d6d7c..00000000000 Binary files a/src/current/images/v22.1/geospatial/s2-coverings.gif and /dev/null differ diff --git a/src/current/images/v22.1/geospatial/s2-cubed-sphere-2d.png b/src/current/images/v22.1/geospatial/s2-cubed-sphere-2d.png deleted file mode 100644 index 84c9ce2af4d..00000000000 Binary files a/src/current/images/v22.1/geospatial/s2-cubed-sphere-2d.png and /dev/null differ diff --git a/src/current/images/v22.1/geospatial/s2-cubed-sphere-3d.png b/src/current/images/v22.1/geospatial/s2-cubed-sphere-3d.png deleted file mode 100644 index 9c0a640679d..00000000000 Binary files a/src/current/images/v22.1/geospatial/s2-cubed-sphere-3d.png and /dev/null differ diff --git a/src/current/images/v22.1/geospatial/s2-linestring-example-covering.png b/src/current/images/v22.1/geospatial/s2-linestring-example-covering.png deleted file mode 100644 index 16acdc959cb..00000000000 Binary files a/src/current/images/v22.1/geospatial/s2-linestring-example-covering.png and /dev/null differ diff --git 
a/src/current/images/v22.1/geospatial/st_contains_false.png b/src/current/images/v22.1/geospatial/st_contains_false.png deleted file mode 100644 index e1fe169db17..00000000000 Binary files a/src/current/images/v22.1/geospatial/st_contains_false.png and /dev/null differ diff --git a/src/current/images/v22.1/geospatial/st_contains_true.png b/src/current/images/v22.1/geospatial/st_contains_true.png deleted file mode 100644 index 45e822f2de4..00000000000 Binary files a/src/current/images/v22.1/geospatial/st_contains_true.png and /dev/null differ diff --git a/src/current/images/v22.1/geospatial/st_convexhull.png b/src/current/images/v22.1/geospatial/st_convexhull.png deleted file mode 100644 index 06fe05900c4..00000000000 Binary files a/src/current/images/v22.1/geospatial/st_convexhull.png and /dev/null differ diff --git a/src/current/images/v22.1/geospatial/st_coveredby_false.png b/src/current/images/v22.1/geospatial/st_coveredby_false.png deleted file mode 100644 index 15c99ff540d..00000000000 Binary files a/src/current/images/v22.1/geospatial/st_coveredby_false.png and /dev/null differ diff --git a/src/current/images/v22.1/geospatial/st_coveredby_true.png b/src/current/images/v22.1/geospatial/st_coveredby_true.png deleted file mode 100644 index 41f100079b5..00000000000 Binary files a/src/current/images/v22.1/geospatial/st_coveredby_true.png and /dev/null differ diff --git a/src/current/images/v22.1/geospatial/st_covers_false.png b/src/current/images/v22.1/geospatial/st_covers_false.png deleted file mode 100644 index a074753965e..00000000000 Binary files a/src/current/images/v22.1/geospatial/st_covers_false.png and /dev/null differ diff --git a/src/current/images/v22.1/geospatial/st_covers_true.png b/src/current/images/v22.1/geospatial/st_covers_true.png deleted file mode 100644 index 325310e24b5..00000000000 Binary files a/src/current/images/v22.1/geospatial/st_covers_true.png and /dev/null differ diff --git a/src/current/images/v22.1/geospatial/st_disjoint_false.png b/src/current/images/v22.1/geospatial/st_disjoint_false.png deleted file mode 100644 index 9512163c154..00000000000 Binary files a/src/current/images/v22.1/geospatial/st_disjoint_false.png and /dev/null differ diff --git a/src/current/images/v22.1/geospatial/st_disjoint_true.png b/src/current/images/v22.1/geospatial/st_disjoint_true.png deleted file mode 100644 index bd80b9d4f66..00000000000 Binary files a/src/current/images/v22.1/geospatial/st_disjoint_true.png and /dev/null differ diff --git a/src/current/images/v22.1/geospatial/st_equals_false.png b/src/current/images/v22.1/geospatial/st_equals_false.png deleted file mode 100644 index 9b0e60f7800..00000000000 Binary files a/src/current/images/v22.1/geospatial/st_equals_false.png and /dev/null differ diff --git a/src/current/images/v22.1/geospatial/st_equals_true.png b/src/current/images/v22.1/geospatial/st_equals_true.png deleted file mode 100644 index b4609d3a922..00000000000 Binary files a/src/current/images/v22.1/geospatial/st_equals_true.png and /dev/null differ diff --git a/src/current/images/v22.1/geospatial/st_intersects_false.png b/src/current/images/v22.1/geospatial/st_intersects_false.png deleted file mode 100644 index dd8feb64deb..00000000000 Binary files a/src/current/images/v22.1/geospatial/st_intersects_false.png and /dev/null differ diff --git a/src/current/images/v22.1/geospatial/st_intersects_true.png b/src/current/images/v22.1/geospatial/st_intersects_true.png deleted file mode 100644 index 8b4c7777717..00000000000 Binary files 
a/src/current/images/v22.1/geospatial/st_intersects_true.png and /dev/null differ diff --git a/src/current/images/v22.1/geospatial/st_overlaps_false.png b/src/current/images/v22.1/geospatial/st_overlaps_false.png deleted file mode 100644 index 4c305c9ab11..00000000000 Binary files a/src/current/images/v22.1/geospatial/st_overlaps_false.png and /dev/null differ diff --git a/src/current/images/v22.1/geospatial/st_overlaps_true.png b/src/current/images/v22.1/geospatial/st_overlaps_true.png deleted file mode 100644 index 83b69bfa00e..00000000000 Binary files a/src/current/images/v22.1/geospatial/st_overlaps_true.png and /dev/null differ diff --git a/src/current/images/v22.1/geospatial/st_touches_false.png b/src/current/images/v22.1/geospatial/st_touches_false.png deleted file mode 100644 index 40e06b3e2a6..00000000000 Binary files a/src/current/images/v22.1/geospatial/st_touches_false.png and /dev/null differ diff --git a/src/current/images/v22.1/geospatial/st_touches_true.png b/src/current/images/v22.1/geospatial/st_touches_true.png deleted file mode 100644 index 4fe877f5435..00000000000 Binary files a/src/current/images/v22.1/geospatial/st_touches_true.png and /dev/null differ diff --git a/src/current/images/v22.1/geospatial/st_union.png b/src/current/images/v22.1/geospatial/st_union.png deleted file mode 100644 index baf3357b31d..00000000000 Binary files a/src/current/images/v22.1/geospatial/st_union.png and /dev/null differ diff --git a/src/current/images/v22.1/geospatial/st_within_false.png b/src/current/images/v22.1/geospatial/st_within_false.png deleted file mode 100644 index 390467e6349..00000000000 Binary files a/src/current/images/v22.1/geospatial/st_within_false.png and /dev/null differ diff --git a/src/current/images/v22.1/geospatial/st_within_true.png b/src/current/images/v22.1/geospatial/st_within_true.png deleted file mode 100644 index e6199a7c3f4..00000000000 Binary files a/src/current/images/v22.1/geospatial/st_within_true.png and /dev/null differ diff --git a/src/current/images/v22.1/geospatial/tutorial/er-birds.png b/src/current/images/v22.1/geospatial/tutorial/er-birds.png deleted file mode 100644 index 29aec5acdb8..00000000000 Binary files a/src/current/images/v22.1/geospatial/tutorial/er-birds.png and /dev/null differ diff --git a/src/current/images/v22.1/geospatial/tutorial/er-bookstores.png b/src/current/images/v22.1/geospatial/tutorial/er-bookstores.png deleted file mode 100644 index 836c4315766..00000000000 Binary files a/src/current/images/v22.1/geospatial/tutorial/er-bookstores.png and /dev/null differ diff --git a/src/current/images/v22.1/geospatial/tutorial/er-roads.png b/src/current/images/v22.1/geospatial/tutorial/er-roads.png deleted file mode 100644 index 3bc342c7b0c..00000000000 Binary files a/src/current/images/v22.1/geospatial/tutorial/er-roads.png and /dev/null differ diff --git a/src/current/images/v22.1/geospatial/tutorial/query-01.png b/src/current/images/v22.1/geospatial/tutorial/query-01.png deleted file mode 100644 index ed0f2f3a19c..00000000000 Binary files a/src/current/images/v22.1/geospatial/tutorial/query-01.png and /dev/null differ diff --git a/src/current/images/v22.1/geospatial/tutorial/query-09.png b/src/current/images/v22.1/geospatial/tutorial/query-09.png deleted file mode 100644 index c9007e58e7c..00000000000 Binary files a/src/current/images/v22.1/geospatial/tutorial/query-09.png and /dev/null differ diff --git a/src/current/images/v22.1/geospatial/tutorial/query-12.png b/src/current/images/v22.1/geospatial/tutorial/query-12.png deleted 
file mode 100644 index 29840a09602..00000000000 Binary files a/src/current/images/v22.1/geospatial/tutorial/query-12.png and /dev/null differ diff --git a/src/current/images/v22.1/google-oidc-client.png b/src/current/images/v22.1/google-oidc-client.png deleted file mode 100644 index 58e3f6e6317..00000000000 Binary files a/src/current/images/v22.1/google-oidc-client.png and /dev/null differ diff --git a/src/current/images/v22.1/icon_info.svg b/src/current/images/v22.1/icon_info.svg deleted file mode 100644 index 57aac994733..00000000000 --- a/src/current/images/v22.1/icon_info.svg +++ /dev/null @@ -1,4 +0,0 @@ - - - - \ No newline at end of file diff --git a/src/current/images/v22.1/iterm2-configuration.png b/src/current/images/v22.1/iterm2-configuration.png deleted file mode 100644 index 0140772e44f..00000000000 Binary files a/src/current/images/v22.1/iterm2-configuration.png and /dev/null differ diff --git a/src/current/images/v22.1/jaeger-cockroachdb.png b/src/current/images/v22.1/jaeger-cockroachdb.png deleted file mode 100644 index f8c25046667..00000000000 Binary files a/src/current/images/v22.1/jaeger-cockroachdb.png and /dev/null differ diff --git a/src/current/images/v22.1/jaeger-trace-json.png b/src/current/images/v22.1/jaeger-trace-json.png deleted file mode 100644 index 8b8cc3062d1..00000000000 Binary files a/src/current/images/v22.1/jaeger-trace-json.png and /dev/null differ diff --git a/src/current/images/v22.1/jaeger-trace-log-messages.png b/src/current/images/v22.1/jaeger-trace-log-messages.png deleted file mode 100644 index 779443d7fb6..00000000000 Binary files a/src/current/images/v22.1/jaeger-trace-log-messages.png and /dev/null differ diff --git a/src/current/images/v22.1/jaeger-trace-spans.png b/src/current/images/v22.1/jaeger-trace-spans.png deleted file mode 100644 index 5f53ce05175..00000000000 Binary files a/src/current/images/v22.1/jaeger-trace-spans.png and /dev/null differ diff --git a/src/current/images/v22.1/jaeger-trace-transaction-contention.png b/src/current/images/v22.1/jaeger-trace-transaction-contention.png deleted file mode 100644 index 565dfdb1962..00000000000 Binary files a/src/current/images/v22.1/jaeger-trace-transaction-contention.png and /dev/null differ diff --git a/src/current/images/v22.1/kibana-crdb-dashboard-selection.png b/src/current/images/v22.1/kibana-crdb-dashboard-selection.png deleted file mode 100644 index 073b772d83f..00000000000 Binary files a/src/current/images/v22.1/kibana-crdb-dashboard-selection.png and /dev/null differ diff --git a/src/current/images/v22.1/kibana-crdb-dashboard-sql.png b/src/current/images/v22.1/kibana-crdb-dashboard-sql.png deleted file mode 100644 index 596f19ac35e..00000000000 Binary files a/src/current/images/v22.1/kibana-crdb-dashboard-sql.png and /dev/null differ diff --git a/src/current/images/v22.1/kibana-crdb-dashboard.png b/src/current/images/v22.1/kibana-crdb-dashboard.png deleted file mode 100644 index c1e47181a17..00000000000 Binary files a/src/current/images/v22.1/kibana-crdb-dashboard.png and /dev/null differ diff --git a/src/current/images/v22.1/kubernetes-alertmanager-home.png b/src/current/images/v22.1/kubernetes-alertmanager-home.png deleted file mode 100644 index 8d1272b27a6..00000000000 Binary files a/src/current/images/v22.1/kubernetes-alertmanager-home.png and /dev/null differ diff --git a/src/current/images/v22.1/kubernetes-prometheus-alertmanagers.png b/src/current/images/v22.1/kubernetes-prometheus-alertmanagers.png deleted file mode 100644 index c6ee9f3db79..00000000000 Binary files 
a/src/current/images/v22.1/kubernetes-prometheus-alertmanagers.png and /dev/null differ diff --git a/src/current/images/v22.1/kubernetes-prometheus-alertrules.png b/src/current/images/v22.1/kubernetes-prometheus-alertrules.png deleted file mode 100644 index edb19e06254..00000000000 Binary files a/src/current/images/v22.1/kubernetes-prometheus-alertrules.png and /dev/null differ diff --git a/src/current/images/v22.1/kubernetes-prometheus-alerts.png b/src/current/images/v22.1/kubernetes-prometheus-alerts.png deleted file mode 100644 index ed0e89aac8d..00000000000 Binary files a/src/current/images/v22.1/kubernetes-prometheus-alerts.png and /dev/null differ diff --git a/src/current/images/v22.1/kubernetes-prometheus-graph.png b/src/current/images/v22.1/kubernetes-prometheus-graph.png deleted file mode 100644 index 9822717cefc..00000000000 Binary files a/src/current/images/v22.1/kubernetes-prometheus-graph.png and /dev/null differ diff --git a/src/current/images/v22.1/kubernetes-prometheus-targets.png b/src/current/images/v22.1/kubernetes-prometheus-targets.png deleted file mode 100644 index 5e4b917eeb8..00000000000 Binary files a/src/current/images/v22.1/kubernetes-prometheus-targets.png and /dev/null differ diff --git a/src/current/images/v22.1/kubernetes-upgrade.png b/src/current/images/v22.1/kubernetes-upgrade.png deleted file mode 100644 index 497559cef73..00000000000 Binary files a/src/current/images/v22.1/kubernetes-upgrade.png and /dev/null differ diff --git a/src/current/images/v22.1/linearscale.png b/src/current/images/v22.1/linearscale.png deleted file mode 100644 index 9843a713461..00000000000 Binary files a/src/current/images/v22.1/linearscale.png and /dev/null differ diff --git a/src/current/images/v22.1/lsm-with-ssts.png b/src/current/images/v22.1/lsm-with-ssts.png deleted file mode 100644 index 5abcc467127..00000000000 Binary files a/src/current/images/v22.1/lsm-with-ssts.png and /dev/null differ diff --git a/src/current/images/v22.1/memtable-wal-sst.png b/src/current/images/v22.1/memtable-wal-sst.png deleted file mode 100644 index a4f31108a47..00000000000 Binary files a/src/current/images/v22.1/memtable-wal-sst.png and /dev/null differ diff --git a/src/current/images/v22.1/movr-app.png b/src/current/images/v22.1/movr-app.png deleted file mode 100644 index e520da1fcee..00000000000 Binary files a/src/current/images/v22.1/movr-app.png and /dev/null differ diff --git a/src/current/images/v22.1/movr-schema.png b/src/current/images/v22.1/movr-schema.png deleted file mode 100644 index 6f50ec28958..00000000000 Binary files a/src/current/images/v22.1/movr-schema.png and /dev/null differ diff --git a/src/current/images/v22.1/movr-statements-rides.png b/src/current/images/v22.1/movr-statements-rides.png deleted file mode 100644 index 9b3f5149ba7..00000000000 Binary files a/src/current/images/v22.1/movr-statements-rides.png and /dev/null differ diff --git a/src/current/images/v22.1/movr-transactions-rides.png b/src/current/images/v22.1/movr-transactions-rides.png deleted file mode 100644 index 9357c7eeeac..00000000000 Binary files a/src/current/images/v22.1/movr-transactions-rides.png and /dev/null differ diff --git a/src/current/images/v22.1/movr_v2.png b/src/current/images/v22.1/movr_v2.png deleted file mode 100644 index b9cd96bbcf3..00000000000 Binary files a/src/current/images/v22.1/movr_v2.png and /dev/null differ diff --git a/src/current/images/v22.1/node-map.png b/src/current/images/v22.1/node-map.png deleted file mode 100644 index 8bca2cec28c..00000000000 Binary files 
a/src/current/images/v22.1/node-map.png and /dev/null differ diff --git a/src/current/images/v22.1/parallel-commits-00.png b/src/current/images/v22.1/parallel-commits-00.png deleted file mode 100644 index 29d3573db87..00000000000 Binary files a/src/current/images/v22.1/parallel-commits-00.png and /dev/null differ diff --git a/src/current/images/v22.1/parallel-commits-01.png b/src/current/images/v22.1/parallel-commits-01.png deleted file mode 100644 index c58603cc36c..00000000000 Binary files a/src/current/images/v22.1/parallel-commits-01.png and /dev/null differ diff --git a/src/current/images/v22.1/parallel-commits-02.png b/src/current/images/v22.1/parallel-commits-02.png deleted file mode 100644 index a7198424b84..00000000000 Binary files a/src/current/images/v22.1/parallel-commits-02.png and /dev/null differ diff --git a/src/current/images/v22.1/parallel-commits-03.png b/src/current/images/v22.1/parallel-commits-03.png deleted file mode 100644 index d1f0b6ab0f9..00000000000 Binary files a/src/current/images/v22.1/parallel-commits-03.png and /dev/null differ diff --git a/src/current/images/v22.1/parallel-commits-04.png b/src/current/images/v22.1/parallel-commits-04.png deleted file mode 100644 index 2a518a1777e..00000000000 Binary files a/src/current/images/v22.1/parallel-commits-04.png and /dev/null differ diff --git a/src/current/images/v22.1/parallel-commits-05.png b/src/current/images/v22.1/parallel-commits-05.png deleted file mode 100644 index 777c36b9614..00000000000 Binary files a/src/current/images/v22.1/parallel-commits-05.png and /dev/null differ diff --git a/src/current/images/v22.1/perf_tuning_concepts1.png b/src/current/images/v22.1/perf_tuning_concepts1.png deleted file mode 100644 index 3a086a41c26..00000000000 Binary files a/src/current/images/v22.1/perf_tuning_concepts1.png and /dev/null differ diff --git a/src/current/images/v22.1/perf_tuning_concepts2.png b/src/current/images/v22.1/perf_tuning_concepts2.png deleted file mode 100644 index d67b8f253f8..00000000000 Binary files a/src/current/images/v22.1/perf_tuning_concepts2.png and /dev/null differ diff --git a/src/current/images/v22.1/perf_tuning_concepts3.png b/src/current/images/v22.1/perf_tuning_concepts3.png deleted file mode 100644 index 46d666be55d..00000000000 Binary files a/src/current/images/v22.1/perf_tuning_concepts3.png and /dev/null differ diff --git a/src/current/images/v22.1/perf_tuning_concepts4.png b/src/current/images/v22.1/perf_tuning_concepts4.png deleted file mode 100644 index b60b19e01bf..00000000000 Binary files a/src/current/images/v22.1/perf_tuning_concepts4.png and /dev/null differ diff --git a/src/current/images/v22.1/perf_tuning_movr_schema.png b/src/current/images/v22.1/perf_tuning_movr_schema.png deleted file mode 100644 index 262adc18b75..00000000000 Binary files a/src/current/images/v22.1/perf_tuning_movr_schema.png and /dev/null differ diff --git a/src/current/images/v22.1/perf_tuning_multi_region_rebalancing.png b/src/current/images/v22.1/perf_tuning_multi_region_rebalancing.png deleted file mode 100644 index 7064e3962db..00000000000 Binary files a/src/current/images/v22.1/perf_tuning_multi_region_rebalancing.png and /dev/null differ diff --git a/src/current/images/v22.1/perf_tuning_multi_region_rebalancing_after_partitioning.png b/src/current/images/v22.1/perf_tuning_multi_region_rebalancing_after_partitioning.png deleted file mode 100644 index 433c0f8ba03..00000000000 Binary files a/src/current/images/v22.1/perf_tuning_multi_region_rebalancing_after_partitioning.png and /dev/null 
differ diff --git a/src/current/images/v22.1/perf_tuning_multi_region_topology.png b/src/current/images/v22.1/perf_tuning_multi_region_topology.png deleted file mode 100644 index fe64c322ca0..00000000000 Binary files a/src/current/images/v22.1/perf_tuning_multi_region_topology.png and /dev/null differ diff --git a/src/current/images/v22.1/perf_tuning_single_region_topology.png b/src/current/images/v22.1/perf_tuning_single_region_topology.png deleted file mode 100644 index 4dfca364929..00000000000 Binary files a/src/current/images/v22.1/perf_tuning_single_region_topology.png and /dev/null differ diff --git a/src/current/images/v22.1/pki_auth.png b/src/current/images/v22.1/pki_auth.png deleted file mode 100644 index 24ae9e4f560..00000000000 Binary files a/src/current/images/v22.1/pki_auth.png and /dev/null differ diff --git a/src/current/images/v22.1/pki_core.png b/src/current/images/v22.1/pki_core.png deleted file mode 100644 index 0421199768c..00000000000 Binary files a/src/current/images/v22.1/pki_core.png and /dev/null differ diff --git a/src/current/images/v22.1/pki_signing.png b/src/current/images/v22.1/pki_signing.png deleted file mode 100644 index 3448f37663a..00000000000 Binary files a/src/current/images/v22.1/pki_signing.png and /dev/null differ diff --git a/src/current/images/v22.1/range-lookup.png b/src/current/images/v22.1/range-lookup.png deleted file mode 100644 index fc88139d8d9..00000000000 Binary files a/src/current/images/v22.1/range-lookup.png and /dev/null differ diff --git a/src/current/images/v22.1/raw-status-endpoints.png b/src/current/images/v22.1/raw-status-endpoints.png deleted file mode 100644 index a893911fa87..00000000000 Binary files a/src/current/images/v22.1/raw-status-endpoints.png and /dev/null differ diff --git a/src/current/images/v22.1/recovery1.png b/src/current/images/v22.1/recovery1.png deleted file mode 100644 index 8a14f7e965a..00000000000 Binary files a/src/current/images/v22.1/recovery1.png and /dev/null differ diff --git a/src/current/images/v22.1/recovery2.png b/src/current/images/v22.1/recovery2.png deleted file mode 100644 index 7ec3fed2adc..00000000000 Binary files a/src/current/images/v22.1/recovery2.png and /dev/null differ diff --git a/src/current/images/v22.1/recovery3.png b/src/current/images/v22.1/recovery3.png deleted file mode 100644 index a82da79f64a..00000000000 Binary files a/src/current/images/v22.1/recovery3.png and /dev/null differ diff --git a/src/current/images/v22.1/remove-dead-node1.png b/src/current/images/v22.1/remove-dead-node1.png deleted file mode 100644 index 7a303df9bc3..00000000000 Binary files a/src/current/images/v22.1/remove-dead-node1.png and /dev/null differ diff --git a/src/current/images/v22.1/replication1.png b/src/current/images/v22.1/replication1.png deleted file mode 100644 index fa4625844f5..00000000000 Binary files a/src/current/images/v22.1/replication1.png and /dev/null differ diff --git a/src/current/images/v22.1/replication2.png b/src/current/images/v22.1/replication2.png deleted file mode 100644 index e05d2ecf019..00000000000 Binary files a/src/current/images/v22.1/replication2.png and /dev/null differ diff --git a/src/current/images/v22.1/scalability1.png b/src/current/images/v22.1/scalability1.png deleted file mode 100644 index 7a70afb6d6a..00000000000 Binary files a/src/current/images/v22.1/scalability1.png and /dev/null differ diff --git a/src/current/images/v22.1/scalability2.png b/src/current/images/v22.1/scalability2.png deleted file mode 100644 index 400748466c6..00000000000 Binary files 
a/src/current/images/v22.1/scalability2.png and /dev/null differ diff --git a/src/current/images/v22.1/serializable_schema.png b/src/current/images/v22.1/serializable_schema.png deleted file mode 100644 index 7e8b4e324c6..00000000000 Binary files a/src/current/images/v22.1/serializable_schema.png and /dev/null differ diff --git a/src/current/images/v22.1/sst.png b/src/current/images/v22.1/sst.png deleted file mode 100644 index 57dd1e55e96..00000000000 Binary files a/src/current/images/v22.1/sst.png and /dev/null differ diff --git a/src/current/images/v22.1/terminal-configuration.png b/src/current/images/v22.1/terminal-configuration.png deleted file mode 100644 index 4dcb35451fc..00000000000 Binary files a/src/current/images/v22.1/terminal-configuration.png and /dev/null differ diff --git a/src/current/images/v22.1/topology-patterns/topology_basic_production1.png b/src/current/images/v22.1/topology-patterns/topology_basic_production1.png deleted file mode 100644 index b96d185197b..00000000000 Binary files a/src/current/images/v22.1/topology-patterns/topology_basic_production1.png and /dev/null differ diff --git a/src/current/images/v22.1/topology-patterns/topology_basic_production2.png b/src/current/images/v22.1/topology-patterns/topology_basic_production2.png deleted file mode 100644 index 22359506c75..00000000000 Binary files a/src/current/images/v22.1/topology-patterns/topology_basic_production2.png and /dev/null differ diff --git a/src/current/images/v22.1/topology-patterns/topology_basic_production_reads.png b/src/current/images/v22.1/topology-patterns/topology_basic_production_reads.png deleted file mode 100644 index fd6b9a35e40..00000000000 Binary files a/src/current/images/v22.1/topology-patterns/topology_basic_production_reads.png and /dev/null differ diff --git a/src/current/images/v22.1/topology-patterns/topology_basic_production_resiliency1.png b/src/current/images/v22.1/topology-patterns/topology_basic_production_resiliency1.png deleted file mode 100644 index 218e3443668..00000000000 Binary files a/src/current/images/v22.1/topology-patterns/topology_basic_production_resiliency1.png and /dev/null differ diff --git a/src/current/images/v22.1/topology-patterns/topology_basic_production_resiliency2.png b/src/current/images/v22.1/topology-patterns/topology_basic_production_resiliency2.png deleted file mode 100644 index a9efce59a67..00000000000 Binary files a/src/current/images/v22.1/topology-patterns/topology_basic_production_resiliency2.png and /dev/null differ diff --git a/src/current/images/v22.1/topology-patterns/topology_basic_production_resiliency3.png b/src/current/images/v22.1/topology-patterns/topology_basic_production_resiliency3.png deleted file mode 100644 index 3c3fd57b457..00000000000 Binary files a/src/current/images/v22.1/topology-patterns/topology_basic_production_resiliency3.png and /dev/null differ diff --git a/src/current/images/v22.1/topology-patterns/topology_basic_production_writes.gif b/src/current/images/v22.1/topology-patterns/topology_basic_production_writes.gif deleted file mode 100644 index 5f12f331e7f..00000000000 Binary files a/src/current/images/v22.1/topology-patterns/topology_basic_production_writes.gif and /dev/null differ diff --git a/src/current/images/v22.1/topology-patterns/topology_development1.png b/src/current/images/v22.1/topology-patterns/topology_development1.png deleted file mode 100644 index 2882937e438..00000000000 Binary files a/src/current/images/v22.1/topology-patterns/topology_development1.png and /dev/null differ diff --git 
a/src/current/images/v22.1/topology-patterns/topology_development2.png b/src/current/images/v22.1/topology-patterns/topology_development2.png deleted file mode 100644 index 1eed95fbaba..00000000000 Binary files a/src/current/images/v22.1/topology-patterns/topology_development2.png and /dev/null differ diff --git a/src/current/images/v22.1/topology-patterns/topology_development_latency.png b/src/current/images/v22.1/topology-patterns/topology_development_latency.png deleted file mode 100644 index 3aa54c45c13..00000000000 Binary files a/src/current/images/v22.1/topology-patterns/topology_development_latency.png and /dev/null differ diff --git a/src/current/images/v22.1/topology-patterns/topology_duplicate_indexes1.png b/src/current/images/v22.1/topology-patterns/topology_duplicate_indexes1.png deleted file mode 100644 index c9ad5d97fa3..00000000000 Binary files a/src/current/images/v22.1/topology-patterns/topology_duplicate_indexes1.png and /dev/null differ diff --git a/src/current/images/v22.1/topology-patterns/topology_duplicate_indexes_reads.png b/src/current/images/v22.1/topology-patterns/topology_duplicate_indexes_reads.png deleted file mode 100644 index 097927ea410..00000000000 Binary files a/src/current/images/v22.1/topology-patterns/topology_duplicate_indexes_reads.png and /dev/null differ diff --git a/src/current/images/v22.1/topology-patterns/topology_duplicate_indexes_resiliency.png b/src/current/images/v22.1/topology-patterns/topology_duplicate_indexes_resiliency.png deleted file mode 100644 index 39056e22a48..00000000000 Binary files a/src/current/images/v22.1/topology-patterns/topology_duplicate_indexes_resiliency.png and /dev/null differ diff --git a/src/current/images/v22.1/topology-patterns/topology_duplicate_indexes_writes.gif b/src/current/images/v22.1/topology-patterns/topology_duplicate_indexes_writes.gif deleted file mode 100644 index 16433549cb4..00000000000 Binary files a/src/current/images/v22.1/topology-patterns/topology_duplicate_indexes_writes.gif and /dev/null differ diff --git a/src/current/images/v22.1/topology-patterns/topology_follow_the_workload_reads.png b/src/current/images/v22.1/topology-patterns/topology_follow_the_workload_reads.png deleted file mode 100644 index 67b01da4d37..00000000000 Binary files a/src/current/images/v22.1/topology-patterns/topology_follow_the_workload_reads.png and /dev/null differ diff --git a/src/current/images/v22.1/topology-patterns/topology_follow_the_workload_writes.gif b/src/current/images/v22.1/topology-patterns/topology_follow_the_workload_writes.gif deleted file mode 100644 index 6cd6be01196..00000000000 Binary files a/src/current/images/v22.1/topology-patterns/topology_follow_the_workload_writes.gif and /dev/null differ diff --git a/src/current/images/v22.1/topology-patterns/topology_follower_reads1.png b/src/current/images/v22.1/topology-patterns/topology_follower_reads1.png deleted file mode 100644 index 1eb07d53d6a..00000000000 Binary files a/src/current/images/v22.1/topology-patterns/topology_follower_reads1.png and /dev/null differ diff --git a/src/current/images/v22.1/topology-patterns/topology_follower_reads3.png b/src/current/images/v22.1/topology-patterns/topology_follower_reads3.png deleted file mode 100644 index d6a125c1079..00000000000 Binary files a/src/current/images/v22.1/topology-patterns/topology_follower_reads3.png and /dev/null differ diff --git a/src/current/images/v22.1/topology-patterns/topology_follower_reads_reads.png b/src/current/images/v22.1/topology-patterns/topology_follower_reads_reads.png 
deleted file mode 100644 index 47657b885b3..00000000000 Binary files a/src/current/images/v22.1/topology-patterns/topology_follower_reads_reads.png and /dev/null differ diff --git a/src/current/images/v22.1/topology-patterns/topology_follower_reads_resiliency.png b/src/current/images/v22.1/topology-patterns/topology_follower_reads_resiliency.png deleted file mode 100644 index 73868163a1e..00000000000 Binary files a/src/current/images/v22.1/topology-patterns/topology_follower_reads_resiliency.png and /dev/null differ diff --git a/src/current/images/v22.1/topology-patterns/topology_follower_reads_writes.gif b/src/current/images/v22.1/topology-patterns/topology_follower_reads_writes.gif deleted file mode 100644 index 8fc4b2c55b7..00000000000 Binary files a/src/current/images/v22.1/topology-patterns/topology_follower_reads_writes.gif and /dev/null differ diff --git a/src/current/images/v22.1/topology-patterns/topology_geo-partitioned_leaseholders1.png b/src/current/images/v22.1/topology-patterns/topology_geo-partitioned_leaseholders1.png deleted file mode 100644 index 66d03a7f113..00000000000 Binary files a/src/current/images/v22.1/topology-patterns/topology_geo-partitioned_leaseholders1.png and /dev/null differ diff --git a/src/current/images/v22.1/topology-patterns/topology_geo-partitioned_leaseholders_reads.png b/src/current/images/v22.1/topology-patterns/topology_geo-partitioned_leaseholders_reads.png deleted file mode 100644 index 0daa6665d05..00000000000 Binary files a/src/current/images/v22.1/topology-patterns/topology_geo-partitioned_leaseholders_reads.png and /dev/null differ diff --git a/src/current/images/v22.1/topology-patterns/topology_geo-partitioned_leaseholders_resiliency1.png b/src/current/images/v22.1/topology-patterns/topology_geo-partitioned_leaseholders_resiliency1.png deleted file mode 100644 index 09aaa95ded9..00000000000 Binary files a/src/current/images/v22.1/topology-patterns/topology_geo-partitioned_leaseholders_resiliency1.png and /dev/null differ diff --git a/src/current/images/v22.1/topology-patterns/topology_geo-partitioned_leaseholders_resiliency2.png b/src/current/images/v22.1/topology-patterns/topology_geo-partitioned_leaseholders_resiliency2.png deleted file mode 100644 index f372f14552c..00000000000 Binary files a/src/current/images/v22.1/topology-patterns/topology_geo-partitioned_leaseholders_resiliency2.png and /dev/null differ diff --git a/src/current/images/v22.1/topology-patterns/topology_geo-partitioned_leaseholders_writes.gif b/src/current/images/v22.1/topology-patterns/topology_geo-partitioned_leaseholders_writes.gif deleted file mode 100644 index f5c8d077818..00000000000 Binary files a/src/current/images/v22.1/topology-patterns/topology_geo-partitioned_leaseholders_writes.gif and /dev/null differ diff --git a/src/current/images/v22.1/topology-patterns/topology_geo-partitioning1.png b/src/current/images/v22.1/topology-patterns/topology_geo-partitioning1.png deleted file mode 100644 index a7bc25e6279..00000000000 Binary files a/src/current/images/v22.1/topology-patterns/topology_geo-partitioning1.png and /dev/null differ diff --git a/src/current/images/v22.1/topology-patterns/topology_geo-partitioning1_no-map.png b/src/current/images/v22.1/topology-patterns/topology_geo-partitioning1_no-map.png deleted file mode 100644 index 3b348dd7430..00000000000 Binary files a/src/current/images/v22.1/topology-patterns/topology_geo-partitioning1_no-map.png and /dev/null differ diff --git 
a/src/current/images/v22.1/topology-patterns/topology_geo-partitioning_reads.png b/src/current/images/v22.1/topology-patterns/topology_geo-partitioning_reads.png deleted file mode 100644 index 6dcdd7e418e..00000000000 Binary files a/src/current/images/v22.1/topology-patterns/topology_geo-partitioning_reads.png and /dev/null differ diff --git a/src/current/images/v22.1/topology-patterns/topology_geo-partitioning_resiliency1.png b/src/current/images/v22.1/topology-patterns/topology_geo-partitioning_resiliency1.png deleted file mode 100644 index d3353c2f8d0..00000000000 Binary files a/src/current/images/v22.1/topology-patterns/topology_geo-partitioning_resiliency1.png and /dev/null differ diff --git a/src/current/images/v22.1/topology-patterns/topology_geo-partitioning_resiliency2.png b/src/current/images/v22.1/topology-patterns/topology_geo-partitioning_resiliency2.png deleted file mode 100644 index 04191e8ddef..00000000000 Binary files a/src/current/images/v22.1/topology-patterns/topology_geo-partitioning_resiliency2.png and /dev/null differ diff --git a/src/current/images/v22.1/topology-patterns/topology_geo-partitioning_writes.gif b/src/current/images/v22.1/topology-patterns/topology_geo-partitioning_writes.gif deleted file mode 100644 index 11435a6bd51..00000000000 Binary files a/src/current/images/v22.1/topology-patterns/topology_geo-partitioning_writes.gif and /dev/null differ diff --git a/src/current/images/v22.1/topology-patterns/topology_multi-region_hardware.png b/src/current/images/v22.1/topology-patterns/topology_multi-region_hardware.png deleted file mode 100644 index dad856590d0..00000000000 Binary files a/src/current/images/v22.1/topology-patterns/topology_multi-region_hardware.png and /dev/null differ diff --git a/src/current/images/v22.1/topology-patterns/topology_pinned_index_leaseholders3.png b/src/current/images/v22.1/topology-patterns/topology_pinned_index_leaseholders3.png deleted file mode 100644 index 7d792d3a5ed..00000000000 Binary files a/src/current/images/v22.1/topology-patterns/topology_pinned_index_leaseholders3.png and /dev/null differ diff --git a/src/current/images/v22.1/topology-patterns/topology_single-region_cluster_resiliency1.png b/src/current/images/v22.1/topology-patterns/topology_single-region_cluster_resiliency1.png deleted file mode 100644 index 7fe13079fe0..00000000000 Binary files a/src/current/images/v22.1/topology-patterns/topology_single-region_cluster_resiliency1.png and /dev/null differ diff --git a/src/current/images/v22.1/topology-patterns/topology_single-region_cluster_resiliency2.png b/src/current/images/v22.1/topology-patterns/topology_single-region_cluster_resiliency2.png deleted file mode 100644 index f11c5989677..00000000000 Binary files a/src/current/images/v22.1/topology-patterns/topology_single-region_cluster_resiliency2.png and /dev/null differ diff --git a/src/current/images/v22.1/tpcc-large-replication-dashboard.png b/src/current/images/v22.1/tpcc-large-replication-dashboard.png deleted file mode 100644 index 05527878baa..00000000000 Binary files a/src/current/images/v22.1/tpcc-large-replication-dashboard.png and /dev/null differ diff --git a/src/current/images/v22.1/tpcc100k.png b/src/current/images/v22.1/tpcc100k.png deleted file mode 100644 index ab5cd6c532f..00000000000 Binary files a/src/current/images/v22.1/tpcc100k.png and /dev/null differ diff --git a/src/current/images/v22.1/tpcc13k.png b/src/current/images/v22.1/tpcc13k.png deleted file mode 100644 index 0e0a22190c5..00000000000 Binary files 
a/src/current/images/v22.1/tpcc13k.png and /dev/null differ diff --git a/src/current/images/v22.1/tpcc140k.png b/src/current/images/v22.1/tpcc140k.png deleted file mode 100644 index c13e1e24d7c..00000000000 Binary files a/src/current/images/v22.1/tpcc140k.png and /dev/null differ diff --git a/src/current/images/v22.1/tpcc50k.png b/src/current/images/v22.1/tpcc50k.png deleted file mode 100644 index c9086f59998..00000000000 Binary files a/src/current/images/v22.1/tpcc50k.png and /dev/null differ diff --git a/src/current/images/v22.1/trace.png b/src/current/images/v22.1/trace.png deleted file mode 100644 index 4f0fb98a753..00000000000 Binary files a/src/current/images/v22.1/trace.png and /dev/null differ diff --git a/src/current/images/v22.1/training-1.1.png b/src/current/images/v22.1/training-1.1.png deleted file mode 100644 index d1adf35bcde..00000000000 Binary files a/src/current/images/v22.1/training-1.1.png and /dev/null differ diff --git a/src/current/images/v22.1/training-1.2.png b/src/current/images/v22.1/training-1.2.png deleted file mode 100644 index 1993355b08e..00000000000 Binary files a/src/current/images/v22.1/training-1.2.png and /dev/null differ diff --git a/src/current/images/v22.1/training-1.png b/src/current/images/v22.1/training-1.png deleted file mode 100644 index 9f8de513337..00000000000 Binary files a/src/current/images/v22.1/training-1.png and /dev/null differ diff --git a/src/current/images/v22.1/training-10.png b/src/current/images/v22.1/training-10.png deleted file mode 100644 index b319a5bf490..00000000000 Binary files a/src/current/images/v22.1/training-10.png and /dev/null differ diff --git a/src/current/images/v22.1/training-11.png b/src/current/images/v22.1/training-11.png deleted file mode 100644 index 2af80764aaf..00000000000 Binary files a/src/current/images/v22.1/training-11.png and /dev/null differ diff --git a/src/current/images/v22.1/training-12.png b/src/current/images/v22.1/training-12.png deleted file mode 100644 index 7a8e4cd8e05..00000000000 Binary files a/src/current/images/v22.1/training-12.png and /dev/null differ diff --git a/src/current/images/v22.1/training-13.png b/src/current/images/v22.1/training-13.png deleted file mode 100644 index fc870143136..00000000000 Binary files a/src/current/images/v22.1/training-13.png and /dev/null differ diff --git a/src/current/images/v22.1/training-14.png b/src/current/images/v22.1/training-14.png deleted file mode 100644 index fe517518ed7..00000000000 Binary files a/src/current/images/v22.1/training-14.png and /dev/null differ diff --git a/src/current/images/v22.1/training-15.png b/src/current/images/v22.1/training-15.png deleted file mode 100644 index 1879ee29d2e..00000000000 Binary files a/src/current/images/v22.1/training-15.png and /dev/null differ diff --git a/src/current/images/v22.1/training-16.png b/src/current/images/v22.1/training-16.png deleted file mode 100644 index 24f6fa3d908..00000000000 Binary files a/src/current/images/v22.1/training-16.png and /dev/null differ diff --git a/src/current/images/v22.1/training-17.png b/src/current/images/v22.1/training-17.png deleted file mode 100644 index 9bb5c8a46dd..00000000000 Binary files a/src/current/images/v22.1/training-17.png and /dev/null differ diff --git a/src/current/images/v22.1/training-18.png b/src/current/images/v22.1/training-18.png deleted file mode 100644 index 8f0ae7aa857..00000000000 Binary files a/src/current/images/v22.1/training-18.png and /dev/null differ diff --git a/src/current/images/v22.1/training-19.png 
b/src/current/images/v22.1/training-19.png deleted file mode 100644 index e1a2414bf29..00000000000 Binary files a/src/current/images/v22.1/training-19.png and /dev/null differ diff --git a/src/current/images/v22.1/training-2.png b/src/current/images/v22.1/training-2.png deleted file mode 100644 index d6d8afd7828..00000000000 Binary files a/src/current/images/v22.1/training-2.png and /dev/null differ diff --git a/src/current/images/v22.1/training-20.png b/src/current/images/v22.1/training-20.png deleted file mode 100644 index d55c4f249ae..00000000000 Binary files a/src/current/images/v22.1/training-20.png and /dev/null differ diff --git a/src/current/images/v22.1/training-21.png b/src/current/images/v22.1/training-21.png deleted file mode 100644 index 5726c9c69a7..00000000000 Binary files a/src/current/images/v22.1/training-21.png and /dev/null differ diff --git a/src/current/images/v22.1/training-22.png b/src/current/images/v22.1/training-22.png deleted file mode 100644 index fe2ca336a95..00000000000 Binary files a/src/current/images/v22.1/training-22.png and /dev/null differ diff --git a/src/current/images/v22.1/training-23.png b/src/current/images/v22.1/training-23.png deleted file mode 100644 index de87538279f..00000000000 Binary files a/src/current/images/v22.1/training-23.png and /dev/null differ diff --git a/src/current/images/v22.1/training-3.png b/src/current/images/v22.1/training-3.png deleted file mode 100644 index 02b5724da59..00000000000 Binary files a/src/current/images/v22.1/training-3.png and /dev/null differ diff --git a/src/current/images/v22.1/training-4.png b/src/current/images/v22.1/training-4.png deleted file mode 100644 index ae55051e60e..00000000000 Binary files a/src/current/images/v22.1/training-4.png and /dev/null differ diff --git a/src/current/images/v22.1/training-5.png b/src/current/images/v22.1/training-5.png deleted file mode 100644 index 65c805404c4..00000000000 Binary files a/src/current/images/v22.1/training-5.png and /dev/null differ diff --git a/src/current/images/v22.1/training-6.1.png b/src/current/images/v22.1/training-6.1.png deleted file mode 100644 index 128ab631ce8..00000000000 Binary files a/src/current/images/v22.1/training-6.1.png and /dev/null differ diff --git a/src/current/images/v22.1/training-6.png b/src/current/images/v22.1/training-6.png deleted file mode 100644 index 8d93f4c3e3d..00000000000 Binary files a/src/current/images/v22.1/training-6.png and /dev/null differ diff --git a/src/current/images/v22.1/training-7.png b/src/current/images/v22.1/training-7.png deleted file mode 100644 index 46179bfd04b..00000000000 Binary files a/src/current/images/v22.1/training-7.png and /dev/null differ diff --git a/src/current/images/v22.1/training-8.png b/src/current/images/v22.1/training-8.png deleted file mode 100644 index d31f2e95a29..00000000000 Binary files a/src/current/images/v22.1/training-8.png and /dev/null differ diff --git a/src/current/images/v22.1/training-9.png b/src/current/images/v22.1/training-9.png deleted file mode 100644 index f386b9a9aa7..00000000000 Binary files a/src/current/images/v22.1/training-9.png and /dev/null differ diff --git a/src/current/images/v22.1/ui-cluster-overview-panel.png b/src/current/images/v22.1/ui-cluster-overview-panel.png deleted file mode 100644 index 3d1463f88b7..00000000000 Binary files a/src/current/images/v22.1/ui-cluster-overview-panel.png and /dev/null differ diff --git a/src/current/images/v22.1/ui-custom-chart-debug-00.png b/src/current/images/v22.1/ui-custom-chart-debug-00.png deleted file 
mode 100644 index d45c4f017db..00000000000 Binary files a/src/current/images/v22.1/ui-custom-chart-debug-00.png and /dev/null differ diff --git a/src/current/images/v22.1/ui-custom-chart-debug-01.png b/src/current/images/v22.1/ui-custom-chart-debug-01.png deleted file mode 100644 index 0ac72022ffc..00000000000 Binary files a/src/current/images/v22.1/ui-custom-chart-debug-01.png and /dev/null differ diff --git a/src/current/images/v22.1/ui-decommissioned-nodes.png b/src/current/images/v22.1/ui-decommissioned-nodes.png deleted file mode 100644 index ecd4a1788aa..00000000000 Binary files a/src/current/images/v22.1/ui-decommissioned-nodes.png and /dev/null differ diff --git a/src/current/images/v22.1/ui-download-button.png b/src/current/images/v22.1/ui-download-button.png deleted file mode 100644 index 8186e20a906..00000000000 Binary files a/src/current/images/v22.1/ui-download-button.png and /dev/null differ diff --git a/src/current/images/v22.1/ui-localities-debug.png b/src/current/images/v22.1/ui-localities-debug.png deleted file mode 100644 index 78fc2a24982..00000000000 Binary files a/src/current/images/v22.1/ui-localities-debug.png and /dev/null differ diff --git a/src/current/images/v22.1/ui-node-components.png b/src/current/images/v22.1/ui-node-components.png deleted file mode 100644 index 810a14d246f..00000000000 Binary files a/src/current/images/v22.1/ui-node-components.png and /dev/null differ diff --git a/src/current/images/v22.1/ui-node-list.png b/src/current/images/v22.1/ui-node-list.png deleted file mode 100644 index 5467fcd43ec..00000000000 Binary files a/src/current/images/v22.1/ui-node-list.png and /dev/null differ diff --git a/src/current/images/v22.1/ui-node-map-after-license.png b/src/current/images/v22.1/ui-node-map-after-license.png deleted file mode 100644 index 8cd6957292d..00000000000 Binary files a/src/current/images/v22.1/ui-node-map-after-license.png and /dev/null differ diff --git a/src/current/images/v22.1/ui-node-map-before-license.png b/src/current/images/v22.1/ui-node-map-before-license.png deleted file mode 100644 index 4d24c64a9db..00000000000 Binary files a/src/current/images/v22.1/ui-node-map-before-license.png and /dev/null differ diff --git a/src/current/images/v22.1/ui-node-map-complete.png b/src/current/images/v22.1/ui-node-map-complete.png deleted file mode 100644 index 5af98793177..00000000000 Binary files a/src/current/images/v22.1/ui-node-map-complete.png and /dev/null differ diff --git a/src/current/images/v22.1/ui-node-map-navigation1.png b/src/current/images/v22.1/ui-node-map-navigation1.png deleted file mode 100644 index 3bc501f2caf..00000000000 Binary files a/src/current/images/v22.1/ui-node-map-navigation1.png and /dev/null differ diff --git a/src/current/images/v22.1/ui-node-map-navigation2.png b/src/current/images/v22.1/ui-node-map-navigation2.png deleted file mode 100644 index 270e118345e..00000000000 Binary files a/src/current/images/v22.1/ui-node-map-navigation2.png and /dev/null differ diff --git a/src/current/images/v22.1/ui-node-map-navigation3.png b/src/current/images/v22.1/ui-node-map-navigation3.png deleted file mode 100644 index 7737afdbea1..00000000000 Binary files a/src/current/images/v22.1/ui-node-map-navigation3.png and /dev/null differ diff --git a/src/current/images/v22.1/ui-node-map.png b/src/current/images/v22.1/ui-node-map.png deleted file mode 100644 index 760d0108c49..00000000000 Binary files a/src/current/images/v22.1/ui-node-map.png and /dev/null differ diff --git a/src/current/images/v22.1/ui-region-component.png 
b/src/current/images/v22.1/ui-region-component.png deleted file mode 100644 index 7a9bcbd5b58..00000000000 Binary files a/src/current/images/v22.1/ui-region-component.png and /dev/null differ diff --git a/src/current/images/v22.1/ui-session-filter.png b/src/current/images/v22.1/ui-session-filter.png deleted file mode 100644 index ce4dcd69efe..00000000000 Binary files a/src/current/images/v22.1/ui-session-filter.png and /dev/null differ diff --git a/src/current/images/v22.1/ui-sessions-details-page.png b/src/current/images/v22.1/ui-sessions-details-page.png deleted file mode 100644 index 41383389b80..00000000000 Binary files a/src/current/images/v22.1/ui-sessions-details-page.png and /dev/null differ diff --git a/src/current/images/v22.1/ui-sessions-page.png b/src/current/images/v22.1/ui-sessions-page.png deleted file mode 100644 index b95a48b27d5..00000000000 Binary files a/src/current/images/v22.1/ui-sessions-page.png and /dev/null differ diff --git a/src/current/images/v22.1/ui-single-node.gif b/src/current/images/v22.1/ui-single-node.gif deleted file mode 100644 index f60d25b0e2a..00000000000 Binary files a/src/current/images/v22.1/ui-single-node.gif and /dev/null differ diff --git a/src/current/images/v22.1/ui-statement-contention.png b/src/current/images/v22.1/ui-statement-contention.png deleted file mode 100644 index 0f4dd29880e..00000000000 Binary files a/src/current/images/v22.1/ui-statement-contention.png and /dev/null differ diff --git a/src/current/images/v22.1/ui-time-range.gif b/src/current/images/v22.1/ui-time-range.gif deleted file mode 100644 index f3d5d7ca2ca..00000000000 Binary files a/src/current/images/v22.1/ui-time-range.gif and /dev/null differ diff --git a/src/current/images/v22.1/ui_activate_diagnostics.png b/src/current/images/v22.1/ui_activate_diagnostics.png deleted file mode 100644 index fcaafbd2bad..00000000000 Binary files a/src/current/images/v22.1/ui_activate_diagnostics.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_activate_diagnostics_dialog.png b/src/current/images/v22.1/ui_activate_diagnostics_dialog.png deleted file mode 100644 index 19b7aa1c229..00000000000 Binary files a/src/current/images/v22.1/ui_activate_diagnostics_dialog.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_available_disk_capacity.png b/src/current/images/v22.1/ui_available_disk_capacity.png deleted file mode 100644 index 7ee4c2c5359..00000000000 Binary files a/src/current/images/v22.1/ui_available_disk_capacity.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_capacity.png b/src/current/images/v22.1/ui_capacity.png deleted file mode 100644 index 1e9085851af..00000000000 Binary files a/src/current/images/v22.1/ui_capacity.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_changefeed_restarts.png b/src/current/images/v22.1/ui_changefeed_restarts.png deleted file mode 100644 index 7989e71fcb6..00000000000 Binary files a/src/current/images/v22.1/ui_changefeed_restarts.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_clock_offset.png b/src/current/images/v22.1/ui_clock_offset.png deleted file mode 100644 index 2f4b3051282..00000000000 Binary files a/src/current/images/v22.1/ui_clock_offset.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_cluster_overview_3_nodes.png b/src/current/images/v22.1/ui_cluster_overview_3_nodes.png deleted file mode 100644 index 1b59a49bd73..00000000000 Binary files a/src/current/images/v22.1/ui_cluster_overview_3_nodes.png and /dev/null differ diff --git 
a/src/current/images/v22.1/ui_cluster_overview_5_nodes.png b/src/current/images/v22.1/ui_cluster_overview_5_nodes.png deleted file mode 100644 index ff7b5ab5518..00000000000 Binary files a/src/current/images/v22.1/ui_cluster_overview_5_nodes.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_cpu_percent.png b/src/current/images/v22.1/ui_cpu_percent.png deleted file mode 100644 index dae468b6d6f..00000000000 Binary files a/src/current/images/v22.1/ui_cpu_percent.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_cpu_time.png b/src/current/images/v22.1/ui_cpu_time.png deleted file mode 100644 index 3e81817ca38..00000000000 Binary files a/src/current/images/v22.1/ui_cpu_time.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_database_grants_view.png b/src/current/images/v22.1/ui_database_grants_view.png deleted file mode 100644 index c21145da9f9..00000000000 Binary files a/src/current/images/v22.1/ui_database_grants_view.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_database_tables_details.png b/src/current/images/v22.1/ui_database_tables_details.png deleted file mode 100644 index ec640e6ebbd..00000000000 Binary files a/src/current/images/v22.1/ui_database_tables_details.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_database_tables_view.png b/src/current/images/v22.1/ui_database_tables_view.png deleted file mode 100644 index 7dd2eba7eba..00000000000 Binary files a/src/current/images/v22.1/ui_database_tables_view.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_disk_iops.png b/src/current/images/v22.1/ui_disk_iops.png deleted file mode 100644 index f0f553547e3..00000000000 Binary files a/src/current/images/v22.1/ui_disk_iops.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_disk_read_bytes.png b/src/current/images/v22.1/ui_disk_read_bytes.png deleted file mode 100644 index 15bcb584f55..00000000000 Binary files a/src/current/images/v22.1/ui_disk_read_bytes.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_disk_read_ops.png b/src/current/images/v22.1/ui_disk_read_ops.png deleted file mode 100644 index 55b356f84ec..00000000000 Binary files a/src/current/images/v22.1/ui_disk_read_ops.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_disk_read_time.png b/src/current/images/v22.1/ui_disk_read_time.png deleted file mode 100644 index fd340744135..00000000000 Binary files a/src/current/images/v22.1/ui_disk_read_time.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_disk_write_bytes.png b/src/current/images/v22.1/ui_disk_write_bytes.png deleted file mode 100644 index e3fd5fccdad..00000000000 Binary files a/src/current/images/v22.1/ui_disk_write_bytes.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_disk_write_ops.png b/src/current/images/v22.1/ui_disk_write_ops.png deleted file mode 100644 index 9e493d69f88..00000000000 Binary files a/src/current/images/v22.1/ui_disk_write_ops.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_disk_write_time.png b/src/current/images/v22.1/ui_disk_write_time.png deleted file mode 100644 index 3cd023ffd40..00000000000 Binary files a/src/current/images/v22.1/ui_disk_write_time.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_events.png b/src/current/images/v22.1/ui_events.png deleted file mode 100644 index c8a948ce9aa..00000000000 Binary files a/src/current/images/v22.1/ui_events.png and /dev/null differ diff --git 
a/src/current/images/v22.1/ui_file_descriptors.png b/src/current/images/v22.1/ui_file_descriptors.png deleted file mode 100644 index 42187c9878d..00000000000 Binary files a/src/current/images/v22.1/ui_file_descriptors.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_hovering.gif b/src/current/images/v22.1/ui_hovering.gif deleted file mode 100644 index 1795471051f..00000000000 Binary files a/src/current/images/v22.1/ui_hovering.gif and /dev/null differ diff --git a/src/current/images/v22.1/ui_jobs_page.png b/src/current/images/v22.1/ui_jobs_page.png deleted file mode 100644 index eb802e49647..00000000000 Binary files a/src/current/images/v22.1/ui_jobs_page.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_jobs_page_details.png b/src/current/images/v22.1/ui_jobs_page_details.png deleted file mode 100644 index d6cfab59fa8..00000000000 Binary files a/src/current/images/v22.1/ui_jobs_page_details.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_live_bytes.png b/src/current/images/v22.1/ui_live_bytes.png deleted file mode 100644 index 98980af23ed..00000000000 Binary files a/src/current/images/v22.1/ui_live_bytes.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_logical_bytes_per_store.png b/src/current/images/v22.1/ui_logical_bytes_per_store.png deleted file mode 100644 index 75ee063d6e0..00000000000 Binary files a/src/current/images/v22.1/ui_logical_bytes_per_store.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_login_sso.png b/src/current/images/v22.1/ui_login_sso.png deleted file mode 100644 index 0ebe284ae3f..00000000000 Binary files a/src/current/images/v22.1/ui_login_sso.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_max_changefeed.png b/src/current/images/v22.1/ui_max_changefeed.png deleted file mode 100644 index 7fb0eba3d8a..00000000000 Binary files a/src/current/images/v22.1/ui_max_changefeed.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_memory_usage.png b/src/current/images/v22.1/ui_memory_usage.png deleted file mode 100644 index ffc2c515616..00000000000 Binary files a/src/current/images/v22.1/ui_memory_usage.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_memory_usage_new.png b/src/current/images/v22.1/ui_memory_usage_new.png deleted file mode 100644 index 97ae93e1b8e..00000000000 Binary files a/src/current/images/v22.1/ui_memory_usage_new.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_network_bytes_received.png b/src/current/images/v22.1/ui_network_bytes_received.png deleted file mode 100644 index e9a274dc793..00000000000 Binary files a/src/current/images/v22.1/ui_network_bytes_received.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_network_bytes_sent.png b/src/current/images/v22.1/ui_network_bytes_sent.png deleted file mode 100644 index 2eb35a43222..00000000000 Binary files a/src/current/images/v22.1/ui_network_bytes_sent.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_network_latency_collapsed_nodes.png b/src/current/images/v22.1/ui_network_latency_collapsed_nodes.png deleted file mode 100644 index 22b243bfba1..00000000000 Binary files a/src/current/images/v22.1/ui_network_latency_collapsed_nodes.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_network_latency_matrix.png b/src/current/images/v22.1/ui_network_latency_matrix.png deleted file mode 100644 index 6b322d3f1ee..00000000000 Binary files a/src/current/images/v22.1/ui_network_latency_matrix.png and /dev/null differ 
diff --git a/src/current/images/v22.1/ui_node_count.png b/src/current/images/v22.1/ui_node_count.png deleted file mode 100644 index d5c103fc868..00000000000 Binary files a/src/current/images/v22.1/ui_node_count.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_nodes_page.png b/src/current/images/v22.1/ui_nodes_page.png deleted file mode 100644 index 495ff14eea0..00000000000 Binary files a/src/current/images/v22.1/ui_nodes_page.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_overview_dashboard.png b/src/current/images/v22.1/ui_overview_dashboard.png deleted file mode 100644 index c2adcbf0c83..00000000000 Binary files a/src/current/images/v22.1/ui_overview_dashboard.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_overview_dashboard_1_suspect.png b/src/current/images/v22.1/ui_overview_dashboard_1_suspect.png deleted file mode 100644 index ac1712da7d1..00000000000 Binary files a/src/current/images/v22.1/ui_overview_dashboard_1_suspect.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_overview_dashboard_3_nodes.png b/src/current/images/v22.1/ui_overview_dashboard_3_nodes.png deleted file mode 100644 index 2928edc8ab1..00000000000 Binary files a/src/current/images/v22.1/ui_overview_dashboard_3_nodes.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_plan_table.png b/src/current/images/v22.1/ui_plan_table.png deleted file mode 100644 index aef67627d02..00000000000 Binary files a/src/current/images/v22.1/ui_plan_table.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_ranges.png b/src/current/images/v22.1/ui_ranges.png deleted file mode 100644 index 316186bb4a3..00000000000 Binary files a/src/current/images/v22.1/ui_ranges.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_replica_circuitbreaker_events.png b/src/current/images/v22.1/ui_replica_circuitbreaker_events.png deleted file mode 100644 index c1b054330ce..00000000000 Binary files a/src/current/images/v22.1/ui_replica_circuitbreaker_events.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_replica_circuitbreaker_replicas.png b/src/current/images/v22.1/ui_replica_circuitbreaker_replicas.png deleted file mode 100644 index 4f82f5d19a2..00000000000 Binary files a/src/current/images/v22.1/ui_replica_circuitbreaker_replicas.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_replica_quiescence.png b/src/current/images/v22.1/ui_replica_quiescence.png deleted file mode 100644 index 663dbfb097e..00000000000 Binary files a/src/current/images/v22.1/ui_replica_quiescence.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_replica_snapshots.png b/src/current/images/v22.1/ui_replica_snapshots.png deleted file mode 100644 index 177d8f571ba..00000000000 Binary files a/src/current/images/v22.1/ui_replica_snapshots.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_replica_snapshots_data.png b/src/current/images/v22.1/ui_replica_snapshots_data.png deleted file mode 100644 index 655c88ab516..00000000000 Binary files a/src/current/images/v22.1/ui_replica_snapshots_data.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_replicas_migration.png b/src/current/images/v22.1/ui_replicas_migration.png deleted file mode 100644 index 6e08c5a3a5b..00000000000 Binary files a/src/current/images/v22.1/ui_replicas_migration.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_replicas_migration2.png b/src/current/images/v22.1/ui_replicas_migration2.png deleted file mode 100644 
index f7183689f20..00000000000 Binary files a/src/current/images/v22.1/ui_replicas_migration2.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_replicas_migration3.png b/src/current/images/v22.1/ui_replicas_migration3.png deleted file mode 100644 index b7d9fd39760..00000000000 Binary files a/src/current/images/v22.1/ui_replicas_migration3.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_replicas_per_node.png b/src/current/images/v22.1/ui_replicas_per_node.png deleted file mode 100644 index a6a662c6f32..00000000000 Binary files a/src/current/images/v22.1/ui_replicas_per_node.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_replicas_per_store.png b/src/current/images/v22.1/ui_replicas_per_store.png deleted file mode 100644 index 2036c392fc8..00000000000 Binary files a/src/current/images/v22.1/ui_replicas_per_store.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_sink_byte_traffic.png b/src/current/images/v22.1/ui_sink_byte_traffic.png deleted file mode 100644 index 4bb61c4e83d..00000000000 Binary files a/src/current/images/v22.1/ui_sink_byte_traffic.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_sink_counts.png b/src/current/images/v22.1/ui_sink_counts.png deleted file mode 100644 index dc8a6690cbf..00000000000 Binary files a/src/current/images/v22.1/ui_sink_counts.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_sink_timings.png b/src/current/images/v22.1/ui_sink_timings.png deleted file mode 100644 index 63f5de2be4a..00000000000 Binary files a/src/current/images/v22.1/ui_sink_timings.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_statement_diagnostics.png b/src/current/images/v22.1/ui_statement_diagnostics.png deleted file mode 100644 index 173c0c1e062..00000000000 Binary files a/src/current/images/v22.1/ui_statement_diagnostics.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_statement_fingerprint_charts.png b/src/current/images/v22.1/ui_statement_fingerprint_charts.png deleted file mode 100644 index c4aa3249898..00000000000 Binary files a/src/current/images/v22.1/ui_statement_fingerprint_charts.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_statement_fingerprint_overview.png b/src/current/images/v22.1/ui_statement_fingerprint_overview.png deleted file mode 100644 index 04dd32ededb..00000000000 Binary files a/src/current/images/v22.1/ui_statement_fingerprint_overview.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_statement_plan.png b/src/current/images/v22.1/ui_statement_plan.png deleted file mode 100644 index 2739ae86371..00000000000 Binary files a/src/current/images/v22.1/ui_statement_plan.png and /dev/null differ diff --git a/src/current/images/v22.1/ui_transaction_latency.png b/src/current/images/v22.1/ui_transaction_latency.png deleted file mode 100644 index 71db3630470..00000000000 Binary files a/src/current/images/v22.1/ui_transaction_latency.png and /dev/null differ diff --git a/src/current/images/v22.1/window-functions.png b/src/current/images/v22.1/window-functions.png deleted file mode 100644 index 887ceeac669..00000000000 Binary files a/src/current/images/v22.1/window-functions.png and /dev/null differ diff --git a/src/current/releases/v22.1.md b/src/current/releases/v22.1.md index a0d3f9070f4..e704a8aa4ab 100644 --- a/src/current/releases/v22.1.md +++ b/src/current/releases/v22.1.md @@ -5,24 +5,42 @@ toc_not_nested: true summary: Additions and changes in CockroachDB version v22.1 since version v21.2 
major_version: v22.1 docs_area: releases -keywords: gin, gin index, gin indexes, inverted index, inverted indexes, accelerated index, accelerated indexes --- -{% assign rel = site.data.releases | where_exp: "rel", "rel.major_version == page.major_version" | sort: "release_date" | reverse %} + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + -{% assign vers = site.data.versions | where_exp: "vers", "vers.major_version == page.major_version" | first %} - -{% if rel and vers %} -{% assign today = "today" | date: "%Y-%m-%d" %} - -{% include releases/testing-release-notice.md major_version=vers %} - -{% include releases/whats-new-intro.md major_version=vers %} - -{% for r in rel %} -{% include releases/{{ page.major_version }}/{{ r.release_name }}.md release=r.release_name %} -{% endfor %} - -{% else %} -No releases are available for this version. See the [Releases](https://www.cockroachlabs.com/docs/releases) page for all available releases. -{% endif %} +This release is no longer supported. For more details, refer to the [Release Support Policy](../releases/release-support-policy.html). For a full list of supported releases, see the [Releases page](../releases/). \ No newline at end of file diff --git a/src/current/v22.1/404.md b/src/current/v22.1/404.md deleted file mode 100644 index 49fe98824f2..00000000000 --- a/src/current/v22.1/404.md +++ /dev/null @@ -1,16 +0,0 @@ ---- -description: "Page not found." -sitemap: false -search: exclude -related_pages: none -toc: false -contribute: false -feedback: false -docs_area: 404 ---- - -
-
-Whoops!
-
-We cannot find the page you are looking for. You may have typed the wrong address or found a broken link.
-
diff --git a/src/current/v22.1/add-column.md b/src/current/v22.1/add-column.md deleted file mode 100644 index 047750796c9..00000000000 --- a/src/current/v22.1/add-column.md +++ /dev/null @@ -1,389 +0,0 @@ ---- -title: ADD COLUMN -summary: Use the ADD COLUMN statement to add columns to tables. -toc: true -docs_area: reference.sql ---- - -`ADD COLUMN` is a subcommand of [`ALTER TABLE`](alter-table.html). Use `ADD COLUMN` to add columns to existing tables. - -{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %} - -{% include {{ page.version.version }}/sql/combine-alter-table-commands.md %} - -## Synopsis - -
-{% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/add_column.html %} -
- - -## Required privileges - -The user must have the `CREATE` [privilege](security-reference/authorization.html#managing-privileges) on the table. - -## Parameters - - Parameter | Description ------------|------------- - `table_name` | The name of the table to which you want to add the column. - `column_name` | The name of the column you want to add. The column name must follow these [identifier rules](keywords-and-identifiers.html#identifiers) and must be unique within the table but can have the same name as indexes or constraints. - `typename` | The [data type](data-types.html) of the new column. - `col_qualification` | An optional list of [column qualifications](create-table.html#column-qualifications). - -## Viewing schema changes - -{% include {{ page.version.version }}/misc/schema-change-view-job.md %} - -## Examples - -### Setup - -The following examples use the [`bank` demo database schema](cockroach-demo.html#datasets). - -To follow along, run [`cockroach demo bank`](cockroach-demo.html) to start a temporary, in-memory cluster with the `bank` schema and dataset preloaded: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach demo bank -~~~ - -### Add a single column - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE bank ADD COLUMN active BOOL; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM bank; -~~~ - -~~~ - column_name | data_type | is_nullable | column_default | generation_expression | indices | is_hidden ---------------+-----------+-------------+----------------+-----------------------+-------------+------------ - id | INT8 | false | NULL | | {bank_pkey} | false - balance | INT8 | true | NULL | | {bank_pkey} | false - payload | STRING | true | NULL | | {bank_pkey} | false - active | BOOL | true | NULL | | {bank_pkey} | false -(4 rows) -~~~ - -### Add multiple columns - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE bank ADD COLUMN location STRING, ADD COLUMN currency STRING; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM bank; -~~~ - -~~~ - column_name | data_type | is_nullable | column_default | generation_expression | indices | is_hidden ---------------+-----------+-------------+----------------+-----------------------+-------------+------------ - id | INT8 | false | NULL | | {bank_pkey} | false - balance | INT8 | true | NULL | | {bank_pkey} | false - payload | STRING | true | NULL | | {bank_pkey} | false - active | BOOL | true | NULL | | {bank_pkey} | false - location | STRING | true | NULL | | {bank_pkey} | false - currency | STRING | true | NULL | | {bank_pkey} | false -(6 rows) -~~~ - -### Add a column with a `NOT NULL` constraint and a `DEFAULT` value - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE bank ADD COLUMN interest DECIMAL NOT NULL DEFAULT (DECIMAL '1.3'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM bank; -~~~ -~~~ - column_name | data_type | is_nullable | column_default | generation_expression | indices | is_hidden ---------------+-----------+-------------+------------------------+-----------------------+-------------+------------ - id | INT8 | false | NULL | | {bank_pkey} | false - balance | INT8 | true | NULL | | {bank_pkey} | false - payload | STRING | true | NULL | | {bank_pkey} | false - active | BOOL | true | NULL | | {bank_pkey} | false - location | STRING | true | NULL | | {bank_pkey} | false - currency | STRING | true | NULL | | {bank_pkey} | false - interest | DECIMAL | false | 
1.3:::DECIMAL::DECIMAL | | {bank_pkey} | false -(7 rows) -~~~ - -### Add a column with a `UNIQUE` constraint - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE bank ADD COLUMN address STRING UNIQUE; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM bank; -~~~ -~~~ - column_name | data_type | is_nullable | column_default | generation_expression | indices | is_hidden ---------------+-----------+-------------+----------------+-----------------------+------------------------------+------------ - id | INT8 | false | NULL | | {bank_address_key,bank_pkey} | false - balance | INT8 | true | NULL | | {bank_pkey} | false - payload | STRING | true | NULL | | {bank_pkey} | false - active | BOOL | true | NULL | | {bank_pkey} | false - location | STRING | true | NULL | | {bank_pkey} | false - currency | STRING | true | NULL | | {bank_pkey} | false - interest | DECIMAL | false | 1.3:::DECIMAL | | {bank_pkey} | false - address | STRING | true | NULL | | {bank_address_key,bank_pkey} | false -(8 rows) -~~~ - -### Add a column with a `FOREIGN KEY` constraint - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE customers ( - id INT PRIMARY KEY, - name STRING -); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE bank ADD COLUMN cust_number INT REFERENCES customers(id); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM bank; -~~~ -~~~ - column_name | data_type | is_nullable | column_default | generation_expression | indices | is_hidden ---------------+-----------+-------------+----------------+-----------------------+------------------------------+------------ - id | INT8 | false | NULL | | {bank_address_key,bank_pkey} | false - balance | INT8 | true | NULL | | {bank_pkey} | false - payload | STRING | true | NULL | | {bank_pkey} | false - active | BOOL | true | NULL | | {bank_pkey} | false - location | STRING | true | NULL | | {bank_pkey} | false - currency | STRING | true | NULL | | {bank_pkey} | false - interest | DECIMAL | false | 1.3:::DECIMAL | | {bank_pkey} | false - address | STRING | true | NULL | | {bank_address_key,bank_pkey} | false - cust_number | INT8 | true | NULL | | {bank_pkey} | false - -(9 rows) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW CONSTRAINTS FROM bank; -~~~ -~~~ - table_name | constraint_name | constraint_type | details | validated --------------+-----------------------+-----------------+----------------------------------------------------+------------ - bank | bank_address_key | UNIQUE | UNIQUE (address ASC) | t - bank | bank_cust_number_fkey | FOREIGN KEY | FOREIGN KEY (cust_number) REFERENCES customers(id) | t - bank | bank_pkey | PRIMARY KEY | PRIMARY KEY (id ASC) | t -(3 rows) -~~~ - -### Add a column with collation - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE bank ADD COLUMN more_names STRING COLLATE en; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM bank; -~~~ -~~~ - column_name | data_type | is_nullable | column_default | generation_expression | indices | is_hidden ---------------+-------------------+-------------+----------------+-----------------------+------------------------------+------------ - id | INT8 | false | NULL | | {bank_address_key,bank_pkey} | false - balance | INT8 | true | NULL | | {bank_pkey} | false - payload | STRING | true | NULL | | {bank_pkey} | false - active | BOOL | true | NULL | | {bank_pkey} | false - location | STRING | true | NULL | | {bank_pkey} | false - 
currency | STRING | true | NULL | | {bank_pkey} | false - interest | DECIMAL | false | 1.3:::DECIMAL | | {bank_pkey} | false - address | STRING | true | NULL | | {bank_address_key,bank_pkey} | false - cust_number | INT8 | true | NULL | | {bank_pkey} | false - more_names | STRING COLLATE en | true | NULL | | {bank_pkey} | false -(10 rows) -~~~ - -### Add a column and assign it to a column family - -#### Add a column and assign it to a new column family - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE bank ADD COLUMN location1 STRING CREATE FAMILY f1; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW CREATE TABLE bank; -~~~ -~~~ - table_name | create_statement --------------+-------------------------------------------------------------------------------------------------------------------------------------- - bank | CREATE TABLE bank ( - | id INT8 NOT NULL, - | balance INT8 NULL, - | payload STRING NULL, - | active BOOL NULL, - | location STRING NULL, - | currency STRING NULL, - | interest DECIMAL NOT NULL DEFAULT 1.3:::DECIMAL, - | address STRING NULL, - | cust_number INT8 NULL, - | more_names STRING COLLATE en NULL, - | location1 STRING NULL, - | CONSTRAINT bank_pkey PRIMARY KEY (id ASC), - | CONSTRAINT fk_cust_number_ref_customers FOREIGN KEY (cust_number) REFERENCES customers(id), - | UNIQUE INDEX bank_address_key (address ASC), - | FAMILY fam_0_id_balance_payload (id, balance, payload, active, location, currency, interest, address, cust_number, more_names), - | FAMILY f1 (location1) - | ) -(1 row) -~~~ - -#### Add a column and assign it to an existing column family - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE bank ADD COLUMN location2 STRING FAMILY f1; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW CREATE TABLE bank; -~~~ -~~~ - table_name | create_statement --------------+-------------------------------------------------------------------------------------------------------------------------------------- - bank | CREATE TABLE bank ( - | id INT8 NOT NULL, - | balance INT8 NULL, - | payload STRING NULL, - | active BOOL NULL, - | location STRING NULL, - | currency STRING NULL, - | interest DECIMAL NOT NULL DEFAULT 1.3:::DECIMAL, - | address STRING NULL, - | cust_number INT8 NULL, - | more_names STRING COLLATE en NULL, - | location1 STRING NULL, - | location2 STRING NULL, - | CONSTRAINT bank_pkey PRIMARY KEY (id ASC), - | CONSTRAINT fk_cust_number_ref_customers FOREIGN KEY (cust_number) REFERENCES customers(id), - | UNIQUE INDEX bank_address_key (address ASC), - | FAMILY fam_0_id_balance_payload (id, balance, payload, active, location, currency, interest, address, cust_number, more_names), - | FAMILY f1 (location1, location2) - | ) -(1 row) -~~~ - -#### Add a column and create a new column family if column family does not exist - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE bank ADD COLUMN new_name STRING CREATE IF NOT EXISTS FAMILY f2; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW CREATE TABLE bank; -~~~ -~~~ - table_name | create_statement --------------+-------------------------------------------------------------------------------------------------------------------------------------- - bank | CREATE TABLE bank ( - | id INT8 NOT NULL, - | balance INT8 NULL, - | payload STRING NULL, - | active BOOL NULL, - | location STRING NULL, - | currency STRING NULL, - | interest DECIMAL NOT NULL DEFAULT 1.3:::DECIMAL, - | address STRING NULL, - | cust_number INT8 NULL, - | 
more_names STRING COLLATE en NULL, - | location1 STRING NULL, - | location2 STRING NULL, - | new_name STRING NULL, - | CONSTRAINT bank_pkey PRIMARY KEY (id ASC), - | CONSTRAINT fk_cust_number_ref_customers FOREIGN KEY (cust_number) REFERENCES customers(id), - | UNIQUE INDEX bank_address_key (address ASC), - | FAMILY fam_0_id_balance_payload (id, balance, payload, active, location, currency, interest, address, cust_number, more_names), - | FAMILY f1 (location1, location2), - | FAMILY f2 (new_name) - | ) -(1 row) -~~~ - -### Add a column with an `ON UPDATE` expression - - `ON UPDATE` expressions set the value for a column when other values in a row are updated. - -For example, suppose you add a new column to the `bank` table: - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE bank ADD COLUMN last_updated TIMESTAMPTZ DEFAULT now() ON UPDATE now(); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT id, balance, last_updated FROM bank LIMIT 5; -~~~ - -~~~ - id | balance | last_updated ------+---------+-------------------------------- - 0 | 0 | 2021-10-21 17:03:41.213557+00 - 1 | 0 | 2021-10-21 17:03:41.213557+00 - 2 | 0 | 2021-10-21 17:03:41.213557+00 - 3 | 0 | 2021-10-21 17:03:41.213557+00 - 4 | 0 | 2021-10-21 17:03:41.213557+00 -(5 rows) -~~~ - -When any value in any row of the `bank` table is updated, CockroachDB re-evaluates the `ON UPDATE` expression and updates the `last_updated` column with the result. - -{% include_cached copy-clipboard.html %} -~~~ sql -> UPDATE bank SET balance = 500 WHERE id = 0; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT id, balance, last_updated FROM bank LIMIT 5; -~~~ - -~~~ - id | balance | last_updated ------+---------+-------------------------------- - 0 | 500 | 2021-10-21 17:06:42.211261+00 - 1 | 0 | 2021-10-21 17:03:41.213557+00 - 2 | 0 | 2021-10-21 17:03:41.213557+00 - 3 | 0 | 2021-10-21 17:03:41.213557+00 - 4 | 0 | 2021-10-21 17:03:41.213557+00 -(5 rows) -~~~ - -## See also - -- [`ALTER TABLE`](alter-table.html) -- [Column-level Constraints](constraints.html) -- [Collation](collate.html) -- [Column Families](column-families.html) -- [`SHOW JOBS`](show-jobs.html) -- [Online Schema Changes](online-schema-changes.html) diff --git a/src/current/v22.1/add-constraint.md b/src/current/v22.1/add-constraint.md deleted file mode 100644 index 68b606f4eea..00000000000 --- a/src/current/v22.1/add-constraint.md +++ /dev/null @@ -1,574 +0,0 @@ ---- -title: ADD CONSTRAINT -summary: Use the ADD CONSTRAINT statement to add constraints to columns. -toc: true -docs_area: reference.sql ---- - -The `ADD CONSTRAINT` [statement](sql-statements.html) is part of `ALTER TABLE` and can add the following [constraints](constraints.html) to columns: - -- [`UNIQUE`](#add-the-unique-constraint) -- [`CHECK`](#add-the-check-constraint) -- [`FOREIGN KEY`](#add-the-foreign-key-constraint-with-cascade) - -{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %} - -To add a primary key constraint to a table, you should explicitly define the primary key at [table creation](create-table.html). To replace an existing primary key, you can use `ADD CONSTRAINT ... PRIMARY KEY`. For details, see [Changing primary keys with `ADD CONSTRAINT ... PRIMARY KEY`](#changing-primary-keys-with-add-constraint-primary-key). - -The [`DEFAULT`](default-value.html) and [`NOT NULL`](not-null.html) constraints are managed through [`ALTER COLUMN`](alter-column.html). 
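For instance (a sketch only; `users` and `name` here are placeholders for any table and column), a default value or a `NOT NULL` requirement is applied with `ALTER COLUMN` rather than with `ADD CONSTRAINT`:

{% include_cached copy-clipboard.html %}
~~~ sql
> ALTER TABLE users ALTER COLUMN name SET DEFAULT 'unknown';
> ALTER TABLE users ALTER COLUMN name SET NOT NULL;
~~~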
- -{% include {{ page.version.version }}/sql/combine-alter-table-commands.md %} - -## Synopsis - -
-{% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/add_constraint.html %} -
- -## Required privileges - -The user must have the `CREATE` [privilege](security-reference/authorization.html#managing-privileges) on the table. - -## Parameters - - Parameter | Description ------------|------------- - `table_name` | The name of the table containing the column you want to constrain. - `constraint_name` | The name of the constraint, which must be unique to its table and follow these [identifier rules](keywords-and-identifiers.html#identifiers). - `constraint_elem` | The [`CHECK`](check.html), [foreign key](foreign-key.html), [`UNIQUE`](unique.html) constraint you want to add.

Adding/changing a `DEFAULT` constraint is done through [`ALTER COLUMN`](alter-column.html).

Adding/changing the table's `PRIMARY KEY` is not supported through `ALTER TABLE`; it can only be specified during [table creation](create-table.html). - -## View schema changes - -{% include {{ page.version.version }}/misc/schema-change-view-job.md %} - -## Changing primary keys with `ADD CONSTRAINT ... PRIMARY KEY` - -When you change a primary key with [`ALTER TABLE ... ALTER PRIMARY KEY`](alter-primary-key.html), the existing primary key index becomes a secondary index. The secondary index created by `ALTER PRIMARY KEY` takes up node memory and can slow down write performance to a cluster. If you do not have queries that filter on the primary key that you are replacing, you can use `ADD CONSTRAINT` to replace the existing primary index without creating a secondary index. - -You can use `ADD CONSTRAINT ... PRIMARY KEY` to add a primary key to an existing table if one of the following is true: - -- No primary key was explicitly defined at [table creation](create-table.html). In this case, the table is created with a default [primary key on `rowid`](indexes.html#creation). Using `ADD CONSTRAINT ... PRIMARY KEY` drops the default primary key and replaces it with a new primary key. -- A [`DROP CONSTRAINT`](drop-constraint.html) statement precedes the `ADD CONSTRAINT ... PRIMARY KEY` statement, in the same transaction. For an example, see [Drop and add the primary key constraint](#drop-and-add-a-primary-key-constraint). - -{{site.data.alerts.callout_info}} -`ALTER TABLE ... ADD PRIMARY KEY` is an alias for `ALTER TABLE ... ADD CONSTRAINT ... PRIMARY KEY`. -{{site.data.alerts.end}} - -## Examples - -{% include {{page.version.version}}/sql/movr-statements.md %} - -### Add the `UNIQUE` constraint - -Adding the [`UNIQUE` constraint](unique.html) requires that all of a column's values be distinct from one another (except for `NULL` values). - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE users ADD CONSTRAINT id_name_unique UNIQUE (id, name); -~~~ - -### Add the `CHECK` constraint - -Adding the [`CHECK` constraint](check.html) requires that all of a column's values evaluate to `TRUE` for a Boolean expression. - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE rides ADD CONSTRAINT check_revenue_positive CHECK (revenue >= 0); -~~~ - -In the process of adding the constraint CockroachDB will run a background job to validate existing table data. If CockroachDB finds a row that violates the constraint during the validation step, the [`ADD CONSTRAINT`](add-constraint.html) statement will fail. - -#### Add constraints to columns created during a transaction - -You can add check constraints to columns that were created earlier in the transaction. For example: - -{% include_cached copy-clipboard.html %} -~~~ sql -> BEGIN; -> ALTER TABLE users ADD COLUMN is_owner STRING; -> ALTER TABLE users ADD CONSTRAINT check_is_owner CHECK (is_owner IN ('yes', 'no', 'unknown')); -> COMMIT; -~~~ - -~~~ -BEGIN -ALTER TABLE -ALTER TABLE -COMMIT -~~~ - -{{site.data.alerts.callout_info}} -The entire transaction will be rolled back, including any new columns that were added, in the following cases: - -- If an existing column is found containing values that violate the new constraint. -- If a new column has a default value or is a [computed column](computed-columns.html) that would have contained values that violate the new constraint. -{{site.data.alerts.end}} - -### Add the foreign key constraint with `CASCADE` - -To add a foreign key constraint, use the steps shown below. 
- -Given two tables, `users` and `vehicles`, without foreign key constraints: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW CREATE users; -~~~ - -~~~ - table_name | create_statement --------------+-------------------------------------------------------------- - users | CREATE TABLE users ( - | id UUID NOT NULL, - | city VARCHAR NOT NULL, - | name VARCHAR NULL, - | address VARCHAR NULL, - | credit_card VARCHAR NULL, - | CONSTRAINT users_pkey PRIMARY KEY (city ASC, id ASC) - | ) -(1 row) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW CREATE vehicles; -~~~ - -~~~ - table_name | create_statement --------------+------------------------------------------------------------------------------------------------ - vehicles | CREATE TABLE vehicles ( - | id UUID NOT NULL, - | city VARCHAR NOT NULL, - | type VARCHAR NULL, - | owner_id UUID NULL, - | creation_time TIMESTAMP NULL, - | status VARCHAR NULL, - | current_location VARCHAR NULL, - | ext JSONB NULL, - | CONSTRAINT vehicles_pkey PRIMARY KEY (city ASC, id ASC), - | ) -(1 row) -~~~ - -You can include a [foreign key action](foreign-key.html#foreign-key-actions) to specify what happens when a foreign key is updated or deleted. - -Using `ON DELETE CASCADE` will ensure that when the referenced row is deleted, all dependent objects are also deleted. - -{{site.data.alerts.callout_danger}} -`CASCADE` does not list the objects it drops or updates, so it should be used with caution. -{{site.data.alerts.end}} - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE vehicles ADD CONSTRAINT users_fk FOREIGN KEY (city, owner_id) REFERENCES users (city, id) ON DELETE CASCADE; -~~~ - -{{site.data.alerts.callout_info}} - By default, referenced columns must be in the same database as the referencing foreign key column. To enable cross-database foreign key references, set the `sql.cross_db_fks.enabled` [cluster setting](cluster-settings.html) to `true`. -{{site.data.alerts.end}} - -### Drop and add a primary key constraint - -Suppose that you want to add `name` to the composite primary key of the `users` table, [without creating a secondary index of the existing primary key](#changing-primary-keys-with-add-constraint-primary-key). - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW CREATE TABLE users; -~~~ - -~~~ - table_name | create_statement --------------+-------------------------------------------------------------- - users | CREATE TABLE users ( - | id UUID NOT NULL, - | city VARCHAR NOT NULL, - | name VARCHAR NULL, - | address VARCHAR NULL, - | credit_card VARCHAR NULL, - | CONSTRAINT users_pkey PRIMARY KEY (city ASC, id ASC) - | ) -(1 row) -~~~ - -First, add a [`NOT NULL`](not-null.html) constraint to the `name` column with [`ALTER COLUMN`](alter-column.html). 
- -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE users ALTER COLUMN name SET NOT NULL; -~~~ - -Then, in the same transaction, [`DROP`](drop-constraint.html) the existing `"primary"` constraint and `ADD` the new one: - -{% include_cached copy-clipboard.html %} -~~~ sql -> BEGIN; -> ALTER TABLE users DROP CONSTRAINT "primary"; -> ALTER TABLE users ADD CONSTRAINT "primary" PRIMARY KEY (city, name, id); -> COMMIT; -~~~ - -~~~ -NOTICE: primary key changes are finalized asynchronously; further schema changes on this table may be restricted until the job completes -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW CREATE TABLE users; -~~~ - -~~~ - table_name | create_statement --------------+--------------------------------------------------------------------- - users | CREATE TABLE users ( - | id UUID NOT NULL, - | city VARCHAR NOT NULL, - | name VARCHAR NOT NULL, - | address VARCHAR NULL, - | credit_card VARCHAR NULL, - | CONSTRAINT users_pkey PRIMARY KEY (city ASC, name ASC, id ASC), - | ) -(1 row) -~~~ - -Using [`ALTER PRIMARY KEY`](alter-primary-key.html) would have created a `UNIQUE` secondary index called `users_city_id_key`. Instead, there is just one index for the primary key constraint. - -### Add a unique index to a `REGIONAL BY ROW` table - -{% include {{page.version.version}}/sql/indexes-regional-by-row.md %} - -This example assumes you have a simulated multi-region database running on your local machine following the steps described in [Low Latency Reads and Writes in a Multi-Region Cluster](demo-low-latency-multi-region-deployment.html). It shows how a `UNIQUE` index is partitioned, but it's similar to how all indexes are partitioned on `REGIONAL BY ROW` tables. - -To show how the automatic partitioning of indexes on `REGIONAL BY ROW` tables works, we will: - -1. [Add a column](add-column.html) to the `users` table in the [MovR dataset](movr.html). -1. Add a [`UNIQUE` constraint](unique.html) to that column. -1. Verify that the index is automatically partitioned for better multi-region performance by using [`SHOW INDEXES`](show-index.html) and [`SHOW PARTITIONS`](show-partitions.html). - -First, add a column and its unique constraint. We'll use `email` since that is something that should be unique per user. - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER TABLE users ADD COLUMN email STRING; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER TABLE users ADD CONSTRAINT user_email_unique UNIQUE (email); -~~~ - -Next, issue the [`SHOW INDEXES`](show-index.html) statement. 
You will see that [the implicit region column](set-locality.html#set-the-table-locality-to-regional-by-row) that was added when the table [was converted to regional by row](demo-low-latency-multi-region-deployment.html#configure-regional-by-row-tables) is now indexed: - -{% include_cached copy-clipboard.html %} -~~~ sql -SHOW INDEXES FROM users; -~~~ - -~~~ - table_name | index_name | non_unique | seq_in_index | column_name | direction | storing | implicit --------------+-------------------+------------+--------------+-------------+-----------+---------+----------- - users | users_pkey | false | 1 | region | ASC | false | true - users | users_pkey | false | 2 | id | ASC | false | false - users | users_pkey | false | 3 | city | N/A | true | false - users | users_pkey | false | 4 | name | N/A | true | false - users | users_pkey | false | 5 | address | N/A | true | false - users | users_pkey | false | 6 | credit_card | N/A | true | false - users | users_pkey | false | 7 | email | N/A | true | false - users | user_email_unique | false | 1 | region | ASC | false | true - users | user_email_unique | false | 2 | email | ASC | false | false - users | user_email_unique | false | 3 | id | ASC | false | true - users | users_city_idx | true | 1 | region | ASC | false | true - users | users_city_idx | true | 2 | city | ASC | false | false - users | users_city_idx | true | 3 | id | ASC | false | true -(13 rows) -~~~ - -Next, issue the [`SHOW PARTITIONS`](show-partitions.html) statement. The output below (which is edited for length) will verify that the unique index was automatically [partitioned](partitioning.html) for you. It shows that the `user_email_unique` index is now partitioned by the database regions `europe-west1`, `us-east1`, and `us-west1`. - -{% include_cached copy-clipboard.html %} -~~~ sql -SHOW PARTITIONS FROM TABLE users; -~~~ - -~~~ - database_name | table_name | partition_name | column_names | index_name | partition_value | ... -----------------+------------+----------------+--------------+-------------------------+------------------+----- - movr | users | europe-west1 | region | users@user_email_unique | ('europe-west1') | ... - movr | users | us-east1 | region | users@user_email_unique | ('us-east1') | ... - movr | users | us-west1 | region | users@user_email_unique | ('us-west1') | ... -~~~ - -To ensure that the uniqueness constraint is enforced properly across regions when rows are inserted, or the `email` column of an existing row is updated, the database needs to do the following additional work when indexes are partitioned as shown above: - -1. Run a one-time-only validation query to ensure that the existing data in the table satisfies the unique constraint. -1. Thereafter, the [optimizer](cost-based-optimizer.html) will automatically add a "uniqueness check" when necessary to any [`INSERT`](insert.html), [`UPDATE`](update.html), or [`UPSERT`](upsert.html) statement affecting the columns in the unique constraint. 
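For example, once the `user_email_unique` constraint above is in place, an [`INSERT`](insert.html) that reuses an email held by an existing row is rejected by that uniqueness check. This is a sketch only: the email value is hypothetical and assumes another row already stores it.

{% include_cached copy-clipboard.html %}
~~~ sql
INSERT INTO users (id, name, city, email) VALUES (gen_random_uuid(), 'Anna', 'new york', 'taken@example.com');
~~~

The statement fails with an error along the lines of `duplicate key value violates unique constraint "user_email_unique"`.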
- -{% include {{page.version.version}}/sql/locality-optimized-search.md %} - -### Using `DEFAULT gen_random_uuid()` in `REGIONAL BY ROW` tables - -To auto-generate unique row identifiers in `REGIONAL BY ROW` tables, use the [`UUID`](uuid.html) column with the `gen_random_uuid()` [function](functions-and-operators.html#id-generation-functions) as the [default value](default-value.html): - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE users ( - id UUID NOT NULL DEFAULT gen_random_uuid(), - city STRING NOT NULL, - name STRING NULL, - address STRING NULL, - credit_card STRING NULL, - CONSTRAINT users_pkey PRIMARY KEY (city ASC, id ASC) -); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO users (name, city) VALUES ('Petee', 'new york'), ('Eric', 'seattle'), ('Dan', 'seattle'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM users; -~~~ - -~~~ - id | city | name | address | credit_card -+--------------------------------------+----------+-------+---------+-------------+ - cf8ee4e2-cd74-449a-b6e6-a0fb2017baa4 | new york | Petee | NULL | NULL - 2382564e-702f-42d9-a139-b6df535ae00a | seattle | Eric | NULL | NULL - 7d27e40b-263a-4891-b29b-d59135e55650 | seattle | Dan | NULL | NULL -(3 rows) -~~~ - -{{site.data.alerts.callout_info}} -When using `DEFAULT gen_random_uuid()` on columns in `REGIONAL BY ROW` tables, uniqueness checks on those columns are disabled by default for performance purposes. CockroachDB assumes uniqueness based on the way this column generates [`UUIDs`](uuid.html#create-a-table-with-auto-generated-unique-row-ids). To enable this check, you can modify the `sql.optimizer.uniqueness_checks_for_gen_random_uuid.enabled` [cluster setting](cluster-settings.html). Note that while there is virtually no chance of a [collision](https://en.wikipedia.org/wiki/Universally_unique_identifier#Collisions) occurring when enabling this setting, it is not truly zero. -{{site.data.alerts.end}} - -### Using implicit vs. explicit index partitioning in `REGIONAL BY ROW` tables - -In `REGIONAL BY ROW` tables, all indexes are partitioned on the region column (usually called [`crdb_region`](set-locality.html#crdb_region)). - -These indexes can either include or exclude the partitioning key (`crdb_region`) as the first column in the index definition: - -- If `crdb_region` is included in the index definition, a [`UNIQUE` index](unique.html) will enforce uniqueness on the set of columns, just like it would in a non-partitioned table. -- If `crdb_region` is excluded from the index definition, that serves as a signal that CockroachDB should enforce uniqueness on only the columns in the index definition. - -In the latter case, the index alone cannot enforce uniqueness on columns that are not a prefix of the index columns, so any time rows are [inserted](insert.html) or [updated](update.html) in a `REGIONAL BY ROW` table that has an implicitly partitioned `UNIQUE` index, the [optimizer](cost-based-optimizer.html) must add uniqueness checks. - -Whether or not to explicitly include `crdb_region` in the index definition depends on the context: - -- If you only need to enforce uniqueness at the region level, then including `crdb_region` in the `UNIQUE` index definition will enforce these semantics and allow you to get better performance on [`INSERT`](insert.html)s, [`UPDATE`](update.html)s, and [`UPSERT`](upsert.html)s, since there will not be any added latency from uniqueness checks. 
-- If you need to enforce global uniqueness, you should not include `crdb_region` in the `UNIQUE` (or [`PRIMARY KEY`](primary-key.html)) index definition, and the database will automatically ensure that the constraint is enforced. - -To illustrate the different behavior of explicitly vs. implicitly partitioned indexes, we will perform the following tasks: - -- Create a schema that includes an explicitly partitioned index, and an implicitly partitioned index. -- Check the output of several queries using `EXPLAIN` to show the differences in behavior between the two. - -1. Start [`cockroach demo`](cockroach-demo.html) as follows: - - {% include_cached copy-clipboard.html %} - ~~~ shell - cockroach demo --geo-partitioned-replicas - ~~~ - -1. Create a multi-region database and an `employees` table. There are three indexes in the table, all `UNIQUE` and all partitioned by the `crdb_region` column. The table schema guarantees that both `id` and `email` are globally unique, while `desk_id` is only unique per region. The indexes on `id` and `email` are implicitly partitioned, while the index on `(crdb_region, desk_id)` is explicitly partitioned. `UNIQUE` indexes can only directly enforce uniqueness on all columns in the index, including partitioning columns, so each of these indexes enforce uniqueness for `id`, `email`, and `desk_id` per region, respectively. - - {% include_cached copy-clipboard.html %} - ~~~ sql - CREATE DATABASE multi_region_test_db PRIMARY REGION "europe-west1" REGIONS "us-west1", "us-east1"; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - USE multi_region_test_db; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - CREATE TABLE employee ( - id INT PRIMARY KEY, - email STRING UNIQUE, - desk_id INT, - UNIQUE (crdb_region, desk_id) - ) LOCALITY REGIONAL BY ROW; - ~~~ - -1. In the statement below, we add a new user with the required `id`, `email`, and `desk_id` columns. CockroachDB needs to do additional work to enforce global uniqueness for the `id` and `email` columns, which are implicitly partitioned. This additional work is in the form of "uniqueness checks" that the optimizer adds as part of mutation queries. - - {% include_cached copy-clipboard.html %} - ~~~ sql - EXPLAIN INSERT INTO employee VALUES (1, 'joe@example.com', 1); - ~~~ - - The `EXPLAIN` output below shows that the optimizer has added two `constraint-check` post queries to check the uniqueness of the implicitly partitioned indexes `id` and `email`. There is no check needed for `desk_id` (really `(crdb_region, desk_id)`), since that constraint is automatically enforced by the explicitly partitioned index we added in the [`CREATE TABLE`](create-table.html) statement above. 
- - ~~~ - info - -------------------------------------------------------------------------------------- - distribution: local - vectorized: true - - • root - │ - ├── • insert - │ │ into: employee(id, email, desk_id, crdb_region) - │ │ - │ └── • buffer - │ │ label: buffer 1 - │ │ - │ └── • values - │ size: 5 columns, 1 row - │ - ├── • constraint-check - │ │ - │ └── • error if rows - │ │ - │ └── • lookup join (semi) - │ │ table: employee@primary - │ │ equality: (lookup_join_const_col_@15, column1) = (crdb_region,id) - │ │ equality cols are key - │ │ pred: column10 != crdb_region - │ │ - │ └── • cross join - │ │ estimated row count: 3 - │ │ - │ ├── • values - │ │ size: 1 column, 3 rows - │ │ - │ └── • scan buffer - │ label: buffer 1 - │ - └── • constraint-check - │ - └── • error if rows - │ - └── • lookup join (semi) - │ table: employee@employee_email_key - │ equality: (lookup_join_const_col_@25, column2) = (crdb_region,email) - │ equality cols are key - │ pred: (column1 != id) OR (column10 != crdb_region) - │ - └── • cross join - │ estimated row count: 3 - │ - ├── • values - │ size: 1 column, 3 rows - │ - └── • scan buffer - label: buffer 1 - ~~~ - -1. The statement below updates the user's `email` column. Because the unique index on the `email` column is implicitly partitioned, the optimizer must perform a uniqueness check. - - {% include_cached copy-clipboard.html %} - ~~~ sql - EXPLAIN UPDATE employee SET email = 'joe1@exaple.com' WHERE id = 1; - ~~~ - - In the `EXPLAIN` output below, the optimizer performs a uniqueness check for `email` since we're not updating any other columns (see the `constraint-check` section). - - ~~~ - info - -------------------------------------------------------------------------------------------------------- - distribution: local - vectorized: true - - • root - │ - ├── • update - │ │ table: employee - │ │ set: email - │ │ - │ └── • buffer - │ │ label: buffer 1 - │ │ - │ └── • render - │ │ estimated row count: 1 - │ │ - │ └── • union all - │ │ estimated row count: 1 - │ │ limit: 1 - │ │ - │ ├── • scan - │ │ estimated row count: 1 (100% of the table; stats collected 1 minute ago) - │ │ table: employee@primary - │ │ spans: [/'us-east1'/1 - /'us-east1'/1] - │ │ - │ └── • scan - │ estimated row count: 1 (100% of the table; stats collected 1 minute ago) - │ table: employee@primary - │ spans: [/'europe-west1'/1 - /'europe-west1'/1] [/'us-west1'/1 - /'us-west1'/1] - │ - └── • constraint-check - │ - └── • error if rows - │ - └── • lookup join (semi) - │ table: employee@employee_email_key - │ equality: (lookup_join_const_col_@18, email_new) = (crdb_region,email) - │ equality cols are key - │ pred: (id != id) OR (crdb_region != crdb_region) - │ - └── • cross join - │ estimated row count: 3 - │ - ├── • values - │ size: 1 column, 3 rows - │ - └── • scan buffer - label: buffer 1 - ~~~ - -1. If we only update the user's `desk_id` as shown below, no uniqueness checks are needed, since the index on that column is explicitly partitioned (it's really `(crdb_region, desk_id)`). - - {% include_cached copy-clipboard.html %} - ~~~ sql - EXPLAIN UPDATE employee SET desk_id = 2 WHERE id = 1; - ~~~ - - Because no uniqueness check is needed, there is no `constraint-check` section in the `EXPLAIN` output. 
- - ~~~ - info - ------------------------------------------------------------------------------------------------ - distribution: local - vectorized: true - - • update - │ table: employee - │ set: desk_id - │ auto commit - │ - └── • render - │ estimated row count: 1 - │ - └── • union all - │ estimated row count: 1 - │ limit: 1 - │ - ├── • scan - │ estimated row count: 1 (100% of the table; stats collected 2 minutes ago) - │ table: employee@primary - │ spans: [/'us-east1'/1 - /'us-east1'/1] - │ - └── • scan - estimated row count: 1 (100% of the table; stats collected 2 minutes ago) - table: employee@primary - spans: [/'europe-west1'/1 - /'europe-west1'/1] [/'us-west1'/1 - /'us-west1'/1] - ~~~ - -## See also - -- [Constraints](constraints.html) -- [Foreign Key Constraint](foreign-key.html) -- [`SHOW CONSTRAINTS`](show-constraints.html) -- [`RENAME CONSTRAINT`](rename-constraint.html) -- [`DROP CONSTRAINT`](drop-constraint.html) -- [`VALIDATE CONSTRAINT`](validate-constraint.html) -- [`ALTER COLUMN`](alter-column.html) -- [`CREATE TABLE`](create-table.html) -- [`ALTER TABLE`](alter-table.html) -- [`SHOW JOBS`](show-jobs.html) -- ['ALTER PRIMARY KEY'](alter-primary-key.html) -- [Online Schema Changes](online-schema-changes.html) diff --git a/src/current/v22.1/add-region.md b/src/current/v22.1/add-region.md deleted file mode 100644 index 51e548cbcfa..00000000000 --- a/src/current/v22.1/add-region.md +++ /dev/null @@ -1,139 +0,0 @@ ---- -title: ADD REGION -summary: The ADD REGION statement adds a region to a multi-region database. -toc: true -docs_area: reference.sql ---- - - The `ALTER DATABASE .. ADD REGION` [statement](sql-statements.html) adds a [region](multiregion-overview.html#database-regions) to a [multi-region database](multiregion-overview.html). While CockroachDB processes an index modification or changing a table to or from a [`REGIONAL BY ROW` table](multiregion-overview.html#regional-by-row-tables), attempting to drop a region from the database containing that `REGIONAL BY ROW` table will produce an error. Similarly, while this statement is running, all index modifications and locality changes on [`REGIONAL BY ROW`](multiregion-overview.html#regional-by-row-tables) tables will be blocked. - -{% include enterprise-feature.md %} - -{{site.data.alerts.callout_info}} -`ADD REGION` is a subcommand of [`ALTER DATABASE`](alter-database.html). -{{site.data.alerts.end}} - -{{site.data.alerts.callout_danger}} -In order to add a region with `ADD REGION`, you must first set a primary database region with [`SET PRIMARY REGION`](set-primary-region.html), or at [database creation](create-database.html). For an example showing how to add a primary region with `ALTER DATABASE`, see [Set the primary region](#set-the-primary-region). -{{site.data.alerts.end}} - -## Synopsis - -
-{% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/alter_database_add_region.html %} -
- -## Parameters - -| Parameter | Description | -|-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| `database_name` | The database to which you are adding a [region](multiregion-overview.html#database-regions). | -| `region_name` | The [region](multiregion-overview.html#database-regions) being added to this database. Allowed values include any region present in `SHOW REGIONS FROM CLUSTER`. | - -## Required privileges - -To add a region to a database, the user must have one of the following: - -- Membership to the [`admin`](security-reference/authorization.html#admin-role) role for the cluster. -- Either [ownership](security-reference/authorization.html#object-ownership) or the [`CREATE` privilege](security-reference/authorization.html#supported-privileges) for the database and all [`REGIONAL BY ROW`](multiregion-overview.html#regional-by-row-tables) tables in the database. - -## Examples - -{% include {{page.version.version}}/sql/multiregion-example-setup.md %} - -### Set the primary region - -Suppose you have a database `foo` in your cluster, and you want to make it a multi-region database. - -To add the first region to the database, or to set an already-added region as the primary region, use a [`SET PRIMARY REGION`](set-primary-region.html) statement: - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER DATABASE foo SET PRIMARY REGION "us-east1"; -~~~ - -~~~ -ALTER DATABASE PRIMARY REGION -~~~ - -Given a cluster with multiple regions, any databases in that cluster that have not yet had their primary regions set will have their replicas spread as broadly as possible for resiliency. When a primary region is added to one of these databases: - -- All tables will be [`REGIONAL BY TABLE`](set-locality.html#regional-by-table) in the primary region by default. -- This means that all such tables will have all of their voting replicas and leaseholders moved to the primary region. This process is known as [rebalancing](architecture/replication-layer.html#leaseholder-rebalancing). 
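-
-As a quick check, assuming the database `foo` contains a table named `users` (a hypothetical table used here only for illustration), you can confirm the locality that the table received when the primary region was set:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-SHOW CREATE TABLE foo.users;
-~~~
-
-The `CREATE TABLE` statement in the output should end with a locality clause similar to `LOCALITY REGIONAL BY TABLE IN PRIMARY REGION`.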
- -### Add regions to a database - -To add more regions to a database that already has at least one region, use an `ADD REGION` statement: - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER database foo ADD region "us-west1"; -~~~ - -~~~ -ALTER DATABASE ADD REGION -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER database foo ADD region "europe-west1"; -~~~ - -~~~ -ALTER DATABASE ADD REGION -~~~ - -### View a database's regions - -To view the regions associated with a multi-region database, use a [`SHOW REGIONS FROM DATABASE`](show-regions.html) statement: - -{% include_cached copy-clipboard.html %} -~~~ sql -SHOW REGIONS FROM DATABASE foo; -~~~ - -~~~ - database | region | primary | zones ------------+--------------+---------+---------- - foo | us-east1 | true | {b,c,d} - foo | europe-west1 | false | {b,c,d} - foo | us-west1 | false | {a,b,c} -(3 rows) -~~~ - -### Drop a region from a database - -To [drop a region](drop-region.html) from a multi-region database, use a [`DROP REGION`](drop-region.html) statement: - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER DATABASE foo DROP REGION "us-west1"; -~~~ - -~~~ -ALTER DATABASE DROP REGION -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -SHOW REGIONS FROM DATABASE foo; -~~~ - -~~~ - database | region | primary | zones ------------+----------+---------+---------- - foo | us-east1 | true | {b,c,d} -(1 row) -~~~ - -## See also - -- [Multi-Region Capabilities Overview](multiregion-overview.html) -- [`SET PRIMARY REGION`](set-primary-region.html) -- [`DROP REGION`](drop-region.html) -- [`SHOW REGIONS`](show-regions.html) -- [`ADD SUPER REGION`](add-super-region.html) -- [`DROP SUPER REGION`](drop-super-region.html) -- [`SHOW SUPER REGIONS`](show-super-regions.html) -- [`ALTER TABLE`](alter-table.html) -- [SQL Statements](sql-statements.html) diff --git a/src/current/v22.1/add-super-region.md b/src/current/v22.1/add-super-region.md deleted file mode 100644 index 8e6f24c6bb9..00000000000 --- a/src/current/v22.1/add-super-region.md +++ /dev/null @@ -1,96 +0,0 @@ ---- -title: ADD SUPER REGION -summary: The ADD SUPER REGION statement creates a set of regions where data from regional tables with home rows in the super region is stored in the super region. -toc: true -docs_area: reference.sql ---- - - The `ALTER DATABASE .. ADD SUPER REGION` [statement](sql-statements.html) adds a [super region](multiregion-overview.html#super-regions) to a [multi-region database](multiregion-overview.html). - -{% include enterprise-feature.md %} - -{{site.data.alerts.callout_info}} -`ADD SUPER REGION` is a subcommand of [`ALTER DATABASE`](alter-database.html). -{{site.data.alerts.end}} - -{{site.data.alerts.callout_info}} -{% include feature-phases/preview.md %} -{{site.data.alerts.end}} - -## Synopsis - -
-{% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/alter_database_add_super_region.html %} -
- -## Parameters - -| Parameter | Description | -|-----------------+----------------------------------------------------------------------------------------------------------| -| `database_name` | The database to which you are adding a [super region](multiregion-overview.html#super-regions). | -| `name` | The name of the [super region](multiregion-overview.html#super-regions) being added to this database. | -| `name_list` | The super region consists of this set of [database regions](multiregion-overview.html#database-regions). | - -## Required privileges - -To add a super region to a database, the user must have one of the following: - -- Membership to the [`admin`](security-reference/authorization.html#admin-role) role for the cluster. -- Either [ownership](security-reference/authorization.html#object-ownership) or the [`CREATE` privilege](security-reference/authorization.html#supported-privileges) for the database. - -## Considerations - -{% include {{page.version.version}}/sql/super-region-considerations.md %} - -## Examples - -The examples in this section use the following setup. - -{% include {{page.version.version}}/sql/multiregion-example-setup.md %} - -#### Set up movr database regions - -{% include {{page.version.version}}/sql/multiregion-movr-add-regions.md %} - -#### Set up movr global tables - -{% include {{page.version.version}}/sql/multiregion-movr-global.md %} - -#### Set up movr regional tables - -{% include {{page.version.version}}/sql/multiregion-movr-regional-by-row.md %} - -### Enable super regions - -{% include {{page.version.version}}/sql/enable-super-regions.md %} - -### Add a super region to a database - -To add a super region to a multi-region database, use the `ALTER DATABASE ... ADD SUPER REGION` statement: - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER DATABASE movr ADD SUPER REGION "usa" VALUES "us-east1", "us-west1"; -~~~ - -~~~ -ALTER DATABASE ADD SUPER REGION -~~~ - -### Allow user to modify a primary region that is part of a super region - -{% include {{page.version.version}}/sql/enable-super-region-primary-region-changes.md %} - -## See also - -- [Multi-Region Capabilities Overview](multiregion-overview.html) -- [Super regions](multiregion-overview.html#super-regions) -- [`SET PRIMARY REGION`](set-primary-region.html) -- [`DROP SUPER REGION`](drop-super-region.html) -- [`ALTER SUPER REGION`](alter-super-region.html) -- [`SHOW SUPER REGIONS`](show-super-regions.html) -- [`DROP REGION`](drop-region.html) -- [`SHOW REGIONS`](show-regions.html) -- [`ALTER TABLE`](alter-table.html) -- [`ALTER DATABASE`](alter-database.html) -- [SQL Statements](sql-statements.html) diff --git a/src/current/v22.1/admission-control.md b/src/current/v22.1/admission-control.md deleted file mode 100644 index 7c2ad61bd4e..00000000000 --- a/src/current/v22.1/admission-control.md +++ /dev/null @@ -1,90 +0,0 @@ ---- -title: Admission Control -summary: Learn about admission control to maintain cluster performance and availability during high load. -toc: true -docs_area: develop ---- - -CockroachDB supports an admission control system to maintain cluster performance and availability when some nodes experience high load. When admission control is enabled, CockroachDB sorts request and response operations into work queues by priority, giving preference to higher priority operations. Internal operations critical to node health, like [node liveness heartbeats](cluster-setup-troubleshooting.html#node-liveness-issues), are high priority. 
The admission control system also prioritizes transactions that hold [locks](crdb-internal.html#cluster_locks), to reduce [contention](performance-best-practices-overview.html#transaction-contention) and release locks earlier.
-
-{{site.data.alerts.callout_info}}
-Admission control is not available for CockroachDB {{ site.data.products.serverless }} clusters.
-{{site.data.alerts.end}}
-
-## Use cases for admission control
-
-A well-provisioned CockroachDB cluster may still encounter performance bottlenecks at the node level, as stateful nodes can develop [hot spots](performance-best-practices-overview.html#hot-spots) that last until the cluster rebalances itself. When hot spots occur, they should not cause failures or degraded performance for important work.
-
-This is particularly important for CockroachDB {{ site.data.products.serverless }}, where one user tenant cluster experiencing high load should not degrade the performance or availability of a different, isolated tenant cluster running on the same host.
-
-Admission control can help if your cluster has degraded performance due to the following types of node overload scenarios:
-
-- The node has more than 32 runnable goroutines per CPU, visible in the **Runnable goroutines per CPU** graph in the [**Overload** dashboard](ui-overload-dashboard.html#runnable-goroutines-per-cpu).
-- The node has a high number of files in level 0 of the Pebble LSM tree, visible in the **LSM L0 Health** graph in the [**Overload** dashboard](ui-overload-dashboard.html#lsm-l0-health).
-- The node has high CPU usage, visible in the **CPU percent** graph in the [**Overload** dashboard](ui-overload-dashboard.html#cpu-percent).
-- The node is experiencing out-of-memory errors, visible in the **Memory Usage** graph in the [**Hardware** dashboard](ui-hardware-dashboard.html#memory-usage). Even though admission control does not explicitly target controlling memory usage, it can reduce memory usage as a side effect of delaying the start of operation execution when the CPU is overloaded.
-
-{{site.data.alerts.callout_info}}
-Admission control is beneficial when overall cluster health is good but some nodes are experiencing overload. If you see these overload scenarios on many nodes in the cluster, that typically means the cluster needs more resources.
-{{site.data.alerts.end}}
-
-## Enable and disable admission control
-
-To enable and disable admission control, use the following [cluster settings](cluster-settings.html):
-
-- `admission.kv.enabled` for work performed by the [KV layer](architecture/distribution-layer.html).
-- `admission.sql_kv_response.enabled` for work performed in the SQL layer when receiving [KV responses](architecture/distribution-layer.html).
-- `admission.sql_sql_response.enabled` for work performed in the SQL layer when receiving [DistSQL responses](architecture/sql-layer.html#distsql).
-
-When you enable admission control, Cockroach Labs recommends that you enable it for **all layers**.
-
-{% include_cached new-in.html version="v22.1" %} Admission control is enabled by default for all layers.
-
-## Work queues and ordering
-
-When admission control is enabled, request and response operations are sorted into work queues where the operations are organized by priority and transaction start time.
-
-Higher priority operations are processed first. The criteria for determining higher and lower priority operations are different at each processing layer, and are determined by the CPU and storage I/O of the operation.
Write operations in the [KV storage layer](architecture/storage-layer.html) in particular are often the cause of performance bottlenecks, and admission control prevents [the Pebble storage engine](architecture/storage-layer.html#pebble) from experiencing high [read amplification](architecture/storage-layer.html#read-amplification). Critical cluster operations like node heartbeats are processed as high priority, as are transactions that hold [locks](crdb-internal.html#cluster_locks) in order to avoid [contention](performance-recipes.html#transaction-contention) and release locks earlier. - -The transaction start time is used within the priority queue and gives preference to operations with earlier transaction start times. For example, within the high priority queue operations with an earlier transaction start time are processed first. - -### Set quality of service level for a session - -{% include_cached new-in.html version="v22.1" %} In an overload scenario where CockroachDB cannot service all requests, you can identify which requests should be prioritized. This is often referred to as _quality of service_ (QoS). Admission control queues work throughout the system. To set the quality of service level on the admission control queues on behalf of SQL requests submitted in a session, use the `default_transaction_quality_of_service` [session variable](set-vars.html). The valid values are `critical`, `background`, and `regular`. Admission control must be enabled for this setting to have an effect. - -To increase the priority of subsequent SQL requests, run: - -{% include_cached copy-clipboard.html %} -~~~ sql -SET default_transaction_quality_of_service=critical; -~~~ - -To decrease the priority of subsequent SQL requests, run: - -{% include_cached copy-clipboard.html %} -~~~ sql -SET default_transaction_quality_of_service=background; -~~~ - -To reset the priority to the default session setting (in between background and critical), run: - -{% include_cached copy-clipboard.html %} -~~~ sql -SET default_transaction_quality_of_service=regular; -~~~ - -## Limitations - -Admission control works on the level of each node, not at the cluster level. The admission control system queues requests until the operations are processed or the request exceeds the timeout value (for example by using [`SET statement_timeout`](set-vars.html#supported-variables)). If you specify aggressive timeout values, the system may operate correctly but have low throughput as the operations exceed the timeout value while only completing part of the work. There is no mechanism for preemptively rejecting requests when the work queues are long. - -Organizing operations by priority can mean that higher priority operations consume all the available resources while lower priority operations remain in the queue until the operation times out. - -## Observe admission control performance - -The [DB Console Overload dashboard](ui-overload-dashboard.html) shows metrics related to the performance of the admission control system. - -## See also - -The [technical note for admission control](https://github.com/cockroachdb/cockroach/blob/master/docs/tech-notes/admission_control.md) for details on the design of the admission control system. - -{% include {{page.version.version}}/sql/server-side-connection-limit.md %} This may be useful in addition to your admission control settings. 
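-
-For reference, the [cluster settings](cluster-settings.html) listed in [Enable and disable admission control](#enable-and-disable-admission-control) can be inspected and changed from a SQL shell. The following is a minimal sketch; because these settings default to `true` in v22.1, you only need to change them to disable admission control for a layer:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-SHOW CLUSTER SETTING admission.kv.enabled;
-SET CLUSTER SETTING admission.kv.enabled = true;
-~~~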
diff --git a/src/current/v22.1/advanced-changefeed-configuration.md b/src/current/v22.1/advanced-changefeed-configuration.md
deleted file mode 100644
index 06124008f00..00000000000
--- a/src/current/v22.1/advanced-changefeed-configuration.md
+++ /dev/null
@@ -1,48 +0,0 @@
----
-title: Advanced Changefeed Configuration
-summary: Tune changefeeds for high durability delivery or throughput.
-toc: true
-docs_area: stream_data
----
-
-{{site.data.alerts.callout_danger}}
-The configurations and settings explained on this page will significantly impact a changefeed's behavior.
-{{site.data.alerts.end}}
-
-The following sections describe settings, configurations, and details to tune changefeeds for these use cases:
-
-- [High durability delivery](#tuning-for-high-durability-delivery)
-- [High throughput](#tuning-for-high-throughput)
-
-Some specific options for the `kafka_sink_config` and `webhook_sink_config` parameters are discussed on this page. However, for more specific tuning for Kafka and Webhook sinks, see the following pages:
-
-- [Kafka sinks](changefeed-sinks.html#kafka-sink-configuration)
-- [Webhook sinks](changefeed-sinks.html#webhook-sink-configuration)
-
-## Tuning for high durability delivery
-
-When designing a system that relies on high durability of message delivery — that is, not missing any message acknowledgement at the downstream sink — consider the following settings and configuration. Before tuning these settings, we recommend reading details on our [changefeed at-least-once-delivery guarantee](changefeed-messages.html#ordering-guarantees).
-
-- Increase the number of seconds before [garbage collection](architecture/storage-layer.html#garbage-collection) with the [`gc.ttlseconds`](configure-replication-zones.html#gc-ttlseconds) setting to provide a higher recoverability window for data if a changefeed fails. For example, if a sink is unavailable, changes are queued until the sink is available again. While the sink is unavailable, changes will be retried until the garbage collection window is reached and then the data is removed.
-
-    You can also use the [`protect_data_from_gc_on_pause`](create-changefeed.html#protect-pause) option in combination with [`on_error=pause`](create-changefeed.html#on-error) to explicitly pause a changefeed on a **non**-retryable error (instead of going into a failure state) and to then protect the changes from garbage collection.
-- Determine what a successful write to Kafka is with the [`kafka_sink_config: {"RequiredAcks": "ALL"}`](changefeed-sinks.html#kafka-required-acks) option, which provides the highest consistency level.
-- Use [Kafka](changefeed-sinks.html#kafka) or [cloud storage](changefeed-sinks.html#cloud-storage-sink) sinks when tuning for high durability delivery in changefeeds. Both Kafka and cloud storage sinks offer built-in advanced protocols, whereas the [webhook sink](changefeed-sinks.html#webhook-sink), while flexible, requires an understanding of how messages are acknowledged and committed by the particular system used for the webhook in order to ensure the durability of message delivery.
-- Ensure that data is ingested downstream in its new format after a schema change by using the [`schema_change_events`](create-changefeed.html#schema-events) and [`schema_change_policy`](create-changefeed.html#schema-policy) options.
For example, setting `schema_change_events=column_changes` and `schema_change_policy=stop` will trigger an error to the `cockroach.log` file on a [schema change](changefeed-messages.html#schema-changes-with-column-backfill) and the changefeed to fail. - -## Tuning for high throughput - -When designing a system that needs to emit a lot of changefeed messages, whether it be steady traffic or a burst in traffic, consider the following settings and configuration: - -- Avoid using the [`resolved`](create-changefeed.html#resolved-option) option or set this to a higher duration. This will help to reduce emitted messages. -- Batch messages to your sink. See the [`Flush`](changefeed-sinks.html#kafka-flush) parameter for the `kafka_sink_config` option. When using cloud storage sinks, use the [`file_size`](create-changefeed.html#file-size) parameter to flush a file when it exceeds the specified size. -- Set the [`changefeed.memory.per_changefeed_limit`](cluster-settings.html) cluster setting to a higher limit to give more memory for buffering for a changefeed. This is useful in situations of heavy traffic. -- Use `avro` as the emitted message [format](create-changefeed.html#format) option with Kafka sinks; JSON encoding can potentially create a slowdown. -- Use the [`compression` option](create-changefeed.html#compression-opt) in cloud storage sinks with JSON to compress the changefeed data files. -- Increase the [`changefeed.backfill.concurrent_scan_requests` setting](cluster-settings.html), which controls the number of concurrent scan requests per node issued during a backfill event. The default behavior, when this setting is at `0`, is that the number of scan requests will be 3 times the number of nodes in the cluster (to a maximum of 100). While increasing this number will allow for higher throughput, it **will increase the cluster load overall**, including CPU and IO usage. -- Enable the [`kv.rangefeed.catchup_scan_iterator_optimization.enabled` setting](cluster-settings.html) to have [rangefeeds](create-and-configure-changefeeds.html#enable-rangefeeds) use time-bound iterators for catch-up scans when possible. Catch-up scans are run for each rangefeed request. This setting improves the performance of changefeeds during some [range-split operations](architecture/distribution-layer.html#range-splits). - -## See also - -- [Cluster Settings](cluster-settings.html) -- [Changefeed Sinks](changefeed-sinks.html) -- [`CREATE CHANGEFEED`](create-changefeed.html) diff --git a/src/current/v22.1/advanced-client-side-transaction-retries.md b/src/current/v22.1/advanced-client-side-transaction-retries.md deleted file mode 100644 index 291932995e4..00000000000 --- a/src/current/v22.1/advanced-client-side-transaction-retries.md +++ /dev/null @@ -1,81 +0,0 @@ ---- -title: Advanced Client-side Transaction Retries -summary: Advanced client-side transaction retry features for library authors -toc: true -docs_area: develop ---- - -This page has instructions for authors of [database drivers and ORMs](install-client-drivers.html) who would like to implement client-side retries in their database driver or ORM for maximum efficiency and ease of use by application developers. - -{{site.data.alerts.callout_info}} -If you are an application developer who needs to implement an application-level retry loop, see the [Client-side intervention example](transactions.html#client-side-intervention-example). 
-{{site.data.alerts.end}} - -## Overview - -To improve the performance of transactions that fail due to [contention](performance-best-practices-overview.html#transaction-contention), CockroachDB includes a set of statements (listed below) that let you retry those transactions. Retrying transactions using these statements has the following benefits: - -1. When you use savepoints, you "hold your place in line" between attempts. Without savepoints, you're starting from scratch every time. -1. Transactions increase their priority each time they're retried, increasing the likelihood they will succeed. This has a lesser effect than #1. - -## How transaction retries work - -A retryable transaction goes through the process described below, which maps to the following SQL statements: - -{% include_cached copy-clipboard.html %} -~~~ sql -> BEGIN; -- #1 -> SAVEPOINT cockroach_restart; -- #2 --- ... various transaction statements ... -- #3 -> RELEASE SAVEPOINT cockroach_restart; -- #5 (Or #4, ROLLBACK, in case of retry error) -> COMMIT; -~~~ - -1. The transaction starts with the [`BEGIN`](begin-transaction.html) statement. - -1. The [`SAVEPOINT`](savepoint.html) statement shown here is a [retry savepoint](#retry-savepoints); that is, it declares the intention to retry the transaction in the case of contention errors. It must be executed after [`BEGIN`](begin-transaction.html), but before the first statement that manipulates a database. Although [nested transactions](savepoint.html#savepoints-for-nested-transactions) are supported in versions of CockroachDB 20.1 and later, a retry savepoint must be the outermost savepoint in a transaction. - -1. The statements in the transaction are executed. - -1. If a statement returns a retry error (identified via the `40001` error code or `"restart transaction"` string at the start of the error message), you can issue the [`ROLLBACK TO SAVEPOINT`](rollback-transaction.html) statement to restart the transaction and increase the transaction's priority. Alternately, the original [`SAVEPOINT`](savepoint.html) statement can be reissued to restart the transaction. - - You must now issue the statements in the transaction again. - - In cases where you do not want the application to retry the transaction, you can issue [`ROLLBACK`](rollback-transaction.html) at this point. Any other statements will be rejected by the server, as is generally the case after an error has been encountered and the transaction has not been closed. - -1. Once the transaction executes all statements without encountering contention errors, execute [`RELEASE SAVEPOINT`](release-savepoint.html) to commit the changes. If this succeeds, all changes made by the transaction become visible to subsequent transactions and are guaranteed to be durable if a crash occurs. - - In some cases, the [`RELEASE SAVEPOINT`](release-savepoint.html) statement itself can fail with a retry error, mainly because transactions in CockroachDB only realize that they need to be restarted when they attempt to commit. If this happens, the retry error is handled as described in step 4. - -## Retry savepoints - -A savepoint defined with the name `cockroach_restart` is a "retry savepoint" and is used to implement advanced client-side transaction retries. A retry savepoint differs from a [savepoint for nested transactions](savepoint.html#savepoints-for-nested-transactions) as follows: - -- It must be the outermost savepoint in the transaction. 
-- After a successful [`RELEASE`](release-savepoint.html), a retry savepoint does not allow further use of the transaction. The next statement must be a [`COMMIT`](commit-transaction.html). -- It cannot be nested. Issuing `SAVEPOINT cockroach_restart` two times in a row only creates a single savepoint marker (this can be verified with [`SHOW SAVEPOINT STATUS`](show-savepoint-status.html)). Issuing `SAVEPOINT cockroach_restart` after `ROLLBACK TO SAVEPOINT cockroach_restart` reuses the marker instead of creating a new one. - -Note that you can [customize the retry savepoint name](#customizing-the-retry-savepoint-name) to something other than `cockroach_restart` with a session variable if you need to. - -## Customizing the retry savepoint name - -{% include {{ page.version.version }}/misc/customizing-the-savepoint-name.md %} - -## Examples - -For examples showing how to use [`SAVEPOINT`](savepoint.html) and the other statements described on this page to implement library support for a programming language, see the following: - -- [Build a Java app with CockroachDB](build-a-java-app-with-cockroachdb.html), in particular the logic in the `runSQL` method. -- The source code of the [sqlalchemy-cockroachdb](https://github.com/cockroachdb/sqlalchemy-cockroachdb) adapter for SQLAlchemy. - -## See also - -- [Transactions](transactions.html) -- [`BEGIN`](begin-transaction.html) -- [`COMMIT`](commit-transaction.html) -- [`ROLLBACK`](rollback-transaction.html) -- [`SAVEPOINT`](savepoint.html) -- [`RELEASE SAVEPOINT`](release-savepoint.html) -- [`SHOW`](show-vars.html) -- [DB Console Transactions Page](ui-transactions-page.html) -- [CockroachDB Architecture: Transaction Layer](architecture/transaction-layer.html) \ No newline at end of file diff --git a/src/current/v22.1/alembic.md b/src/current/v22.1/alembic.md deleted file mode 100644 index 4aa75c7bcda..00000000000 --- a/src/current/v22.1/alembic.md +++ /dev/null @@ -1,575 +0,0 @@ ---- -title: Migrate CockroachDB Schemas with Alembic -summary: Learn how to use Alembic with a CockroachDB cluster. -toc: true -docs_area: develop ---- - -This page guides you through a series of simple database schema changes using the [Alembic](https://alembic.sqlalchemy.org/en/latest/) schema migration module with a simple Python application built on SQLAlchemy and CockroachDB. - -For a detailed tutorial about using Alembic, see [the Alembic documentation site](https://alembic.sqlalchemy.org/en/latest/tutorial.html). - -For information about specific migration tasks, see Alembic's [Cookbook](https://alembic.sqlalchemy.org/en/latest/cookbook.html). - -## Before you begin - -Before you begin the tutorial, [install CockroachDB](install-cockroachdb.html). - -## Step 1. Start a cluster and create a database - -1. Start a [demo cluster](cockroach-demo.html): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach demo --no-example-database - ~~~ - - This command creates a virtual cluster and opens a SQL shell to that cluster. - - {{site.data.alerts.callout_info}} - Leave this terminal window open for the duration of the tutorial. Closing the window will destroy the cluster and erase all data in it. - {{site.data.alerts.end}} - -1. Create the `bank` database: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE DATABASE bank; - ~~~ - -## Step 2. Get the application code - -1. 
Open a new terminal, and clone the [`example-app-python-sqlalchemy`](https://github.com/cockroachlabs/example-app-python-sqlalchemy) GitHub repository: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ git clone git@github.com:cockroachlabs/example-app-python-sqlalchemy.git - ~~~ - -## Step 3. Install and initialize Alembic - -1. Navigate to the `example-app-python-sqlalchemy` project directory, and run the following commands to create and start a virtual environment: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ python3 -m venv env - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ source env/bin/activate - ~~~ - -1. Install the `alembic`, [`sqlalchemy-cockroachdb`](https://github.com/cockroachdb/sqlalchemy-cockroachdb), and [`psycopg2`](https://github.com/psycopg/psycopg2/) modules to the virtual environment: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ pip install sqlalchemy-cockroachdb psycopg2-binary alembic - ~~~ - - The `sqlalchemy-cockroachdb` and `psycopg2-binary` modules are required to use the CockroachDB adapter that the app uses to run transactions against a CockroachDB cluster. - - `alembic` includes the `sqlalchemy` module, which is a primary dependency of the `example-app-python-sqlalchemy` sample app. The `alembic` install also includes the `alembic` command line tool, which we use throughout the tutorial to manage migrations. - -1. Use the `alembic` command-line tool to initialize Alembic for the project: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ alembic init alembic - ~~~ - - ~~~ - Creating directory /path/example-app-python- - sqlalchemy/alembic ... done - Creating directory /path/example-app-python- - sqlalchemy/alembic/versions ... done - Generating /path/example-app-python- - sqlalchemy/alembic/script.py.mako ... done - Generating /path/example-app-python- - sqlalchemy/alembic/env.py ... done - Generating /path/example-app-python- - sqlalchemy/alembic/README ... done - Generating /path/example-app-python- - sqlalchemy/alembic.ini ... done - Please edit configuration/connection/logging settings in - '/path/example-app-python-sqlalchemy/alembic.ini' before - proceeding. - ~~~ - - This command creates a migrations directory called `alembic`. This directory will contain the files that specify the schema migrations for the app. - - The command also creates a properties file called `alembic.ini` at the top of the project directory. - -1. Open `alembic.ini` and update the `sqlalchemy.url` property to specify the correct connection string to your database: - - For example: - - ~~~ - sqlalchemy.url = cockroachdb://demo:demo72529@127.0.0.1:26257/bank?sslmode=require - ~~~ - - {{site.data.alerts.callout_info}} - You must use the `cockroachdb://` prefix in the connection string for SQLAlchemy to make sure the CockroachDB dialect is used. Using the `postgresql://` URL prefix to connect to your CockroachDB cluster will not work. - {{site.data.alerts.end}} - -## Step 4. Create and run a migration script - -1. Use the `alembic` command-line tool to create the first migration script: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ alembic revision -m "create accounts table" - ~~~ - - ~~~ - Generating /path/example-app-python-sqlalchemy/alembic/versions/ad72c7ec8b22_create_accounts_table.py ... done - ~~~ - -1. 
Open the newly-created migration file (`alembic/versions/ad72c7ec8b22_create_accounts_table.py`, in this case), and edit the `upgrade()` and `downgrade()` functions to read as follows: - - {% include_cached copy-clipboard.html %} - ~~~ python - def upgrade(): - op.create_table( - 'accounts', - sa.Column('id', sa.dialects.postgresql.UUID, primary_key=True), - sa.Column('balance', sa.Integer), - ) - - - def downgrade(): - op.drop_table('accounts') - ~~~ - - Running this migration creates the `accounts` table, with an `id` column and a `balance` column. - - Note that this file also specifies an operation for "downgrading" the migration. In this case, downgrading will drop the `accounts` table, effectively reversing the schema changes of the migration. - -1. Use the `alembic` tool to run this first migration: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ alembic upgrade head - ~~~ - - ~~~ - INFO [alembic.runtime.migration] Context impl CockroachDBImpl. - INFO [alembic.runtime.migration] Will assume non-transactional DDL. - INFO [alembic.runtime.migration] Running upgrade -> ad72c7ec8b22, create accounts table - ~~~ - - Specifying `head` runs the latest migration. This migration will create the `accounts` table. It will also create a table called `alembic_version`, which tracks the current migration version of the database. - -## Step 5. Verify the migration - -1. Open the terminal with the SQL shell to your demo cluster, and verify that the table was successfully created: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > USE bank; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SHOW TABLES; - ~~~ - - ~~~ - schema_name | table_name | type | owner | estimated_row_count | locality - --------------+-----------------+-------+-------+---------------------+----------- - public | accounts | table | demo | 0 | NULL - public | alembic_version | table | demo | 1 | NULL - (2 rows) - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SELECT * FROM alembic_version; - ~~~ - - ~~~ - version_num - ---------------- - ad72c7ec8b22 - (1 row) - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SHOW COLUMNS FROM accounts; - ~~~ - - ~~~ - column_name | data_type | is_nullable | column_default | generation_expression | indices | is_hidden - --------------+-----------+-------------+----------------+-----------------------+-----------+------------ - id | UUID | false | NULL | | {primary} | false - balance | INT8 | true | NULL | | {primary} | false - (2 rows) - ~~~ - -1. In a different terminal, set the `DATABASE_URL` environment variable to the connection string for your cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ export DATABASE_URL=cockroachdb://demo:demo72529@127.0.0.1:26257/bank?sslmode=require - ~~~ - - The sample app reads in `DATABASE_URL` as the connection string to the database. - -1. Run the app to insert, update, and delete rows of data: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ python main.py - ~~~ - - ~~~ - Creating new accounts... - Created new account with id e9b4a9da-fbbb-40de-8c44-60c5c741764d and balance 93911. - Created new account with id 34a6b5d6-0f08-4435-89cb-c7fa30037926 and balance 989744. - ... - Created new account with id 18a7a209-72c3-48b6-986c-2631fff38274 and balance 969474. - Created new account with id 68e73209-fe2e-42db-a54e-c9d101990cdc and balance 382471. 
- Random account balances: - Account 9acbf774-3e22-4d75-aee0-37e63d3b1ab6: 403963 - Account 82451815-3a87-4d67-a9b0-7766726abd31: 315597 - Transferring 201981 from account 9acbf774-3e22-4d75-aee0-37e63d3b1ab6 to account 82451815-3a87-4d67-a9b0-7766726abd31... - Transfer complete. - New balances: - Account 9acbf774-3e22-4d75-aee0-37e63d3b1ab6: 201982 - Account 82451815-3a87-4d67-a9b0-7766726abd31: 517578 - Deleting existing accounts... - Deleted account 13d1b940-9a7b-47d6-b719-6a2b49a3b08c. - Deleted account 6958f8f9-4d38-424c-bf41-5673f20169b1. - Deleted account c628bd7f-3054-4cd6-b2c9-8c2e3def1720. - Deleted account f4268300-6d0a-4d6e-9489-ad30f215d1ad. - Deleted account feae4e4a-c003-4c29-b672-5422438a885b. - ~~~ - -## Step 6. Add additional migrations - -Suppose you want to add a new [computed column](computed-columns.html) to the `accounts` table that tracks which accounts are overdrawn. - -1. Create a new migration with the `alembic` tool: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ alembic revision -m "add overdrawn column" - ~~~ - - ~~~ - Generating /path/example-app-python-sqlalchemy/alembic/versions/fd88c68af7b5_add_overdrawn_column.py ... done - ~~~ - -1. Open the migration file (`alembic/versions/fd88c68af7b5_add_overdrawn_column.py`), update the imports, and edit the `upgrade()` and `downgrade()` functions: - - ~~~ python - from alembic import op - from sqlalchemy import Column, Boolean, Computed - - ... - - def upgrade(): - op.add_column('accounts', sa.Column('overdrawn', Boolean, Computed('CASE WHEN balance < 0 THEN True ELSE False END'))) - - - def downgrade(): - op.drop_column('accounts', 'overdrawn') - ~~~ - -1. Use the `alembic` tool to run the migration. - - Because this is the latest migration, you can specify `head`, or you can use the migration's ID (`fd88c68af7b5`): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ alembic upgrade fd88c68af7b5 - ~~~ - - ~~~ - INFO [alembic.runtime.migration] Context impl CockroachDBImpl. - INFO [alembic.runtime.migration] Will assume non-transactional DDL. - INFO [alembic.runtime.migration] Running upgrade ad72c7ec8b22 -> fd88c68af7b5, add_overdrawn_column - ~~~ - -1. In the terminal with the SQL shell to your demo cluster, verify that the column was successfully created: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SHOW COLUMNS FROM accounts; - ~~~ - - ~~~ - column_name | data_type | is_nullable | column_default | generation_expression | indices | is_hidden - --------------+-----------+-------------+----------------+------------------------------------------------+-----------+------------ - id | UUID | false | NULL | | {primary} | false - balance | INT8 | true | NULL | | {primary} | false - overdrawn | BOOL | true | NULL | CASE WHEN balance < 0 THEN true ELSE false END | {primary} | false - (3 rows) - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SELECT * FROM accounts; - ~~~ - - ~~~ - id | balance | overdrawn - ---------------------------------------+---------+------------ - 01894212-7f32-4e4c-b855-146630d928bc | 548554 | false - 033131cf-7c42-4021-9a53-f8a7597ec853 | 828874 | false - 041a2c5d-0bce-4ed4-a91d-a9e3a6e06632 | 768526 | false - 080be3a3-40f8-40c6-a0cc-a61c108db3f5 | 599729 | false - 08503245-ba1a-4255-8ca7-22b3688e69dd | 7962 | false - ... - ~~~ - - The changes will also be reflected in the `alembic_version` table. 
- - {% include_cached copy-clipboard.html %} - ~~~ sql - > SELECT * FROM alembic_version; - ~~~ - - ~~~ - version_num - ---------------- - fd88c68af7b5 - (1 row) - ~~~ - -## Execute Raw SQL with Alembic - -While [Alembic supports most SQL operations](https://alembic.sqlalchemy.org/en/latest/ops.html), you can always execute raw SQL using the `execute()` operation. - -{{site.data.alerts.callout_success}} -Executing DDL statements as raw SQL can be particularly helpful when using SQL syntax for DDL statements specific to CockroachDB, like [`ALTER TABLE ... ALTER PRIMARY KEY`](alter-primary-key.html) or [`ALTER TABLE ... SET LOCALITY`](set-locality.html) statements. -{{site.data.alerts.end}} - -For example, the raw SQL for the second migration would look something like this: - -~~~ sql -ALTER TABLE accounts ADD COLUMN overdrawn BOOLEAN AS ( - CASE - WHEN balance < 0 THEN True - ELSE False - END -) STORED; -~~~ - -To make the second migration use raw SQL instead of Alembic operations, open `alembic/versions/fd88c68af7b5_add_overdrawn_column.py`, and edit the `upgrade()` function to use `execute()` instead of the operation-specific function: - -~~~ python -def upgrade(): - op.execute(text("""ALTER TABLE accounts ADD COLUMN overdrawn BOOLEAN AS ( - CASE - WHEN balance < 0 THEN True - ELSE False - END - ) STORED;""")) -~~~ - -Before running this migration, downgrade the original migration: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ alembic downgrade -1 -~~~ - -~~~ -INFO [alembic.runtime.migration] Context impl CockroachDBImpl. -INFO [alembic.runtime.migration] Will assume non-transactional DDL. -INFO [alembic.runtime.migration] Running downgrade fd88c68af7b5 -> ad72c7ec8b22, add_overdrawn_column -~~~ - -Then, in the SQL shell to the demo cluster, verify that the `overdrawn` column has been dropped from the table: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM accounts; -~~~ - -~~~ - column_name | data_type | is_nullable | column_default | generation_expression | indices | is_hidden ---------------+-----------+-------------+----------------+-----------------------+-----------+------------ - id | UUID | false | NULL | | {primary} | false - balance | INT8 | true | NULL | | {primary} | false -(2 rows) -~~~ - -Now, run the updated migration script: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ alembic upgrade fd88c68af7b5 -~~~ - -~~~ -INFO [alembic.runtime.migration] Context impl CockroachDBImpl. -INFO [alembic.runtime.migration] Will assume non-transactional DDL. -INFO [alembic.runtime.migration] Running upgrade ad72c7ec8b22 -> fd88c68af7b5, add_overdrawn_column -~~~ - -And verify that the column has been added to the table: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM accounts; -~~~ - -~~~ - column_name | data_type | is_nullable | column_default | generation_expression | indices | is_hidden ---------------+-----------+-------------+----------------+------------------------------------------------+-----------+------------ - id | UUID | false | NULL | | {primary} | false - balance | INT8 | true | NULL | | {primary} | false - overdrawn | BOOL | true | NULL | CASE WHEN balance < 0 THEN true ELSE false END | {primary} | false -(3 rows) -~~~ - -## Auto-generate a Migration - -Alembic can automatically generate migrations, based on changes to the models in your application source code. - -Let's use the same example `overdrawn` computed column from above. 
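-
-If you want to confirm which revision the database is at before downgrading, you can query the `alembic_version` table directly (the revision IDs shown throughout this tutorial are examples; yours will differ):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> SELECT version_num FROM alembic_version;
-~~~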
- -First, downgrade the `fd88c68af7b5` migration: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ alembic downgrade -1 -~~~ - -~~~ -INFO [alembic.runtime.migration] Context impl CockroachDBImpl. -INFO [alembic.runtime.migration] Will assume non-transactional DDL. -INFO [alembic.runtime.migration] Running downgrade fd88c68af7b5 -> ad72c7ec8b22, add_overdrawn_column -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM accounts; -~~~ - -~~~ - column_name | data_type | is_nullable | column_default | generation_expression | indices | is_hidden ---------------+-----------+-------------+----------------+-----------------------+-----------+------------ - id | UUID | false | NULL | | {primary} | false - balance | INT8 | true | NULL | | {primary} | false -(2 rows) -~~~ - -Delete the old migration file: -{% include_cached copy-clipboard.html %} -~~~ shell -rm alembic/versions/fd88c68af7b5_add_overdrawn_column.py -~~~ - -Open the `models.py` file in the app's project, and add the `overdrawn` column to the `Account` class definition: - -~~~ python -from sqlalchemy import Column, Integer, Boolean, Computed - -... - -class Account(Base): - """The Account class corresponds to the "accounts" database table. - """ - __tablename__ = 'accounts' - id = Column(UUID(as_uuid=True), primary_key=True) - balance = Column(Integer) - overdrawn = Column('overdrawn', Boolean, Computed('CASE WHEN balance < 0 THEN True ELSE False END')) -~~~ - -Then, open the `alembic/env.py` file, and add the following import to the top of the file: - -~~~ python -from ..models import Base -~~~ - -And update the variable `target_metadata` to read as follows: - -~~~ python -target_metadata = Base.metadata -~~~ - -These two lines import the database model metadata from the app. - -Use the `alembic` command-line tool to auto-generate the migration from the models defined in the app: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ alembic revision --autogenerate -m "add overdrawn column" -~~~ - -~~~ -INFO [alembic.runtime.migration] Context impl CockroachDBImpl. -INFO [alembic.runtime.migration] Will assume non-transactional DDL. -INFO [alembic.autogenerate.compare] Detected added column 'accounts.overdrawn' - Generating /path/example-app-python-sqlalchemy/alembic/versions/44fa7043e441_add_overdrawn_column.py ... done -~~~ - -Alembic creates a new migration file (`44fa7043e441_add_overdrawn_column.py`, in this case). - -If you open this file, you'll see that it looks very similar to the one you manually created earlier in the tutorial. - -~~~ python -... -def upgrade(): - # ### commands auto generated by Alembic - please adjust! ### - op.add_column('accounts', sa.Column('overdrawn', sa.Boolean(), sa.Computed('CASE WHEN balance < 0 THEN True ELSE False END', ), nullable=True)) - # ### end Alembic commands ### - - -def downgrade(): - # ### commands auto generated by Alembic - please adjust! ### - op.drop_column('accounts', 'overdrawn') - # ### end Alembic commands ### -~~~ - -Run the migration: - - -{% include_cached copy-clipboard.html %} -~~~ shell -$ alembic upgrade 44fa7043e441 -~~~ - -~~~ -INFO [alembic.runtime.migration] Context impl CockroachDBImpl. -INFO [alembic.runtime.migration] Will assume non-transactional DDL. 
-INFO [alembic.runtime.migration] Running upgrade ad72c7ec8b22 -> 44fa7043e441, add overdrawn column -~~~ - -Verify that the new column exists in the `accounts` table: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM accounts; -~~~ - -~~~ - column_name | data_type | is_nullable | column_default | generation_expression | indices | is_hidden ---------------+-----------+-------------+----------------+------------------------------------------------+-----------+------------ - id | UUID | false | NULL | | {primary} | false - balance | INT8 | true | NULL | | {primary} | false - overdrawn | BOOL | true | NULL | CASE WHEN balance < 0 THEN true ELSE false END | {primary} | false -(3 rows) -~~~ - -## Report Issues with Alembic and CockroachDB - -If you run into problems, please file an issue in the [`alembic` repository](https://github.com/sqlalchemy/alembic/issues), including the following details about the environment where you encountered the issue: - -- CockroachDB version ([`cockroach version`](cockroach-version.html)) -- Alembic version -- Operating system -- Steps to reproduce the behavior - -## See Also - -- [`cockroach demo`](cockroach-demo.html) -- [Alembic documentation](https://alembic.sqlalchemy.org/en/latest/) -- [`alembic` GitHub repository](https://github.com/sqlalchemy/alembic) -- [Client connection parameters](connection-parameters.html) -- [Third-Party Database Tools](third-party-database-tools.html) -- [Learn CockroachDB SQL](learn-cockroachdb-sql.html) diff --git a/src/current/v22.1/alter-backup.md b/src/current/v22.1/alter-backup.md deleted file mode 100644 index efa08daf699..00000000000 --- a/src/current/v22.1/alter-backup.md +++ /dev/null @@ -1,101 +0,0 @@ ---- -title: ALTER BACKUP -summary: Use the ALTER BACKUP statement to add new KMS encryption keys to backups. -toc: true -docs_area: reference.sql ---- - -{% include enterprise-feature.md %} - -{% include_cached new-in.html version="v22.1" %} The `ALTER BACKUP` statement allows for new KMS encryption keys to be applied to an existing chain of encrypted backups ([full](take-full-and-incremental-backups.html#full-backups) and [incremental](take-full-and-incremental-backups.html#incremental-backups)). Each `ALTER BACKUP` statement must include the new KMS encryption key with `NEW_KMS`, and use `WITH OLD_KMS` to refer to at least one of the KMS URIs that were originally used to encrypt the backup. - -After an `ALTER BACKUP` statement successfully completes, subsequent [`BACKUP`](backup.html), [`RESTORE`](restore.html), and [`SHOW BACKUP`](show-backup.html) statements can use any of the existing or new KMS URIs to decrypt the backup. - -CockroachDB supports AWS and Google Cloud KMS keys. For more detail on encrypted backups and restores, see [Take and Restore Encrypted Backups](take-and-restore-encrypted-backups.html). - -## Synopsis - -
-{% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/alter_backup.html %} -
-
-## Parameters
-
-Parameter | Description
-------------------+-------------------------------------------------------------------------------------------------------------------------
-`subdirectory` | The subdirectory containing the target **full** backup at the given `collectionURI`.
-`LATEST` | The most recent backup at the given `collectionURI`.
-`collectionURI` | The URI that holds the backup collection.
-`ADD NEW_KMS` | Apply the new KMS encryption key to the target backup.
-`WITH OLD_KMS` | Reference one of the existing KMS URI(s) originally used to encrypt the backup.
-`kmsURI` | The [URI](take-and-restore-encrypted-backups.html#uri-formats) for the KMS key.
-
-## Required privileges
-
-- `ALTER BACKUP` can only be run by members of the [`admin` role](security-reference/authorization.html#admin-role). By default, the `root` user belongs to the `admin` role.
-- `ALTER BACKUP` requires full read and write permissions to the target cloud storage bucket.
-
-The backup collection's URI does **not** require the [`admin` role](security-reference/authorization.html#admin-role) when using `s3` or `gs` with [`SPECIFIED`](use-cloud-storage-for-bulk-operations.html#authentication) credentials. The backup collection's URI **does** require the [`admin` role](security-reference/authorization.html#admin-role) when using `s3` or `gs` with [`IMPLICIT`](use-cloud-storage-for-bulk-operations.html#authentication) credentials.
-
-We recommend using [cloud storage for bulk operations](use-cloud-storage-for-bulk-operations.html).
-
-## Examples
-
-`ALTER BACKUP` will apply the new encryption information to the entire chain of backups ([full](take-full-and-incremental-backups.html#full-backups) and [incremental](take-full-and-incremental-backups.html#incremental-backups)).
-
-{{site.data.alerts.callout_info}}
-When running `ALTER BACKUP` with a subdirectory, the statement must point to a [full backup](take-full-and-incremental-backups.html#full-backups) in the backup collection.
-{{site.data.alerts.end}}
-
-See [Use Cloud Storage for Bulk Operations](use-cloud-storage-for-bulk-operations.html) for more detail on authenticating to your cloud storage bucket.
-
-### Add an AWS KMS key to an encrypted backup
-
-To add a new KMS key to the most recent backup:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-ALTER BACKUP LATEST IN 's3://{BUCKET NAME}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}'
-    ADD NEW_KMS = 'aws:///{new-key}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}&REGION={location}'
-    WITH OLD_KMS = 'aws:///{old-key}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}&REGION={location}';
-~~~
-
-To add a new KMS key to a specific backup, issue an `ALTER BACKUP` statement that points to the full backup:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-ALTER BACKUP '2022/03/23-213101.37' IN 's3://{BUCKET NAME}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}'
-    ADD NEW_KMS = 'aws:///{new-key}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}&REGION={location}'
-    WITH OLD_KMS = 'aws:///{old-key}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}&REGION={location}';
-~~~
-
-To list backup directories at a collection's URI, see [`SHOW BACKUP`](show-backup.html).
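-
-After the statement completes, you can optionally confirm that the backup is readable with the newly added key by running [`SHOW BACKUP`](show-backup.html) with the new KMS URI. The following is a sketch that reuses the placeholder values from the examples above:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-SHOW BACKUP '2022/03/23-213101.37' IN 's3://{BUCKET NAME}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}'
-    WITH kms = 'aws:///{new-key}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}&REGION={location}';
-~~~
-
-If the supplied KMS URI cannot decrypt the backup, the statement returns an error.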
- -### Add a Google Cloud KMS key to an encrypted backup - -To add a new KMS key to the most recent backup: - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER BACKUP LATEST IN 'gs://{BUCKET NAME}?AUTH=specified&CREDENTIALS={ENCODED KEY}' - ADD NEW_KMS = 'gs:///projects/{project name}/locations/{location}/keyRings/{key ring name}/cryptoKeys/{new key}?AUTH=specified&CREDENTIALS={encoded key}' - WITH OLD_KMS = 'gs:///projects/{project name}/locations/{location}/keyRings/{key ring name}/cryptoKeys/{old key}?AUTH=specified&CREDENTIALS={encoded key}'; -~~~ - -To add a new KMS key to a specific backup, issue an `ALTER BACKUP` statement that points to the full backup: - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER BACKUP '2022/03/23-213101.37' IN 'gs://{BUCKET NAME}?AUTH=specified&CREDENTIALS={ENCODED KEY}' - ADD NEW_KMS = 'gs:///projects/{project name}/locations/{location}/keyRings/{key ring name}/cryptoKeys/{new key}?AUTH=specified&CREDENTIALS={encoded key}' - WITH OLD_KMS = 'gs:///projects/{project name}/locations/{location}/keyRings/{key ring name}/cryptoKeys/{old key}?AUTH=specified&CREDENTIALS={encoded key}'; -~~~ - -To list backup directories at a collection's URI, see [`SHOW BACKUP`](show-backup.html). - -## See also - -- [Take and Restore Encrypted Backups](take-and-restore-encrypted-backups.html) -- [`BACKUP`](backup.html) -- [`RESTORE`](restore.html) -- [Use Cloud Storage for Bulk Operations](use-cloud-storage-for-bulk-operations.html) diff --git a/src/current/v22.1/alter-changefeed.md b/src/current/v22.1/alter-changefeed.md deleted file mode 100644 index e4b64a86524..00000000000 --- a/src/current/v22.1/alter-changefeed.md +++ /dev/null @@ -1,256 +0,0 @@ ---- -title: ALTER CHANGEFEED -summary: Use the ALTER CHANGEFEED statement to add and drop changefeed targets, as well as set and unset options. -toc: true -docs_area: reference.sql ---- - -{% include enterprise-feature.md %} - -{% include_cached new-in.html version="v22.1" %} The `ALTER CHANGEFEED` statement modifies an existing [changefeed](change-data-capture-overview.html). You can use `ALTER CHANGEFEED` to do the following: - -- Add new target tables to a changefeed. -- Remove target tables from a changefeed. -- Set new options on a changefeed. -- Remove existing options from a changefeed. - -The statement will return a job ID and the new job description. - -It is necessary to [**pause**](pause-job.html) a changefeed before running the `ALTER CHANGEFEED` statement against it. For an example of a changefeed modification using `ALTER CHANGEFEED`, see [Modify a changefeed](#modify-a-changefeed). - -## Synopsis - -
-{% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/alter_changefeed.html %} -
- -## Parameters - -Parameter | Description -----------------------------------------+------------------------------------------------------------------------------------------------------------------------- -`job_ID` | Specify the changefeed `job_ID` to modify. -`WITH` | Use `ADD {tables} WITH initial_scan` to perform a scan when adding a target table or multiple target tables. The `ALTER CHANGEFEED` statement does not perform an initial scan by default, regardless of whether [`initial_scan`](create-changefeed.html#initial-scan) was set with the **original** `CREATE CHANGEFEED` statement. It is also possible to explicitly state `ADD {tables} WITH no_initial_scan`, although the default makes this unnecessary. See further details in the [Options](#scan-details) section. -`ADD` | Add a new target table to a changefeed. See the [example](#add-targets-to-a-changefeed). -`DROP` | Drop a target table from a changefeed. It is **not** possible to drop all target tables from a changefeed. See the [example](#drop-targets-from-a-changefeed). -`SET` | Set new options on a changefeed. `ALTER CHANGEFEED ... SET ...` uses the [`CREATE CHANGEFEED`](create-changefeed.html#options) options with some [exceptions](#option-exceptions). See the [example](#set-options-on-a-changefeed). -`UNSET` | Remove options that were set with the original `CREATE CHANGEFEED` statement with some [exceptions](#option-exceptions). See the [example](#unset-options-on-a-changefeed). - -When the listed parameters are used together in the same statement, all changes will apply at the same time with no particular order of operations. - -{{site.data.alerts.callout_info}} -{% include {{ page.version.version }}/cdc/initial-scan-limit-alter-changefeed.md %} -{{site.data.alerts.end}} - -### Options - -Consider the following when specifying options with `ALTER CHANGEFEED`: - -- You can set a different [sink URI](changefeed-sinks.html#sink-uri) for an existing changefeed with the `sink` option. It is **not** possible to change the sink type. For example, you can use `SET sink = 'gs://{BUCKET NAME}?AUTH=IMPLICIT'` to use a different Google Cloud Storage bucket. However, you cannot use the `sink` option to move to Amazon S3 (`s3://`) or Kafka (`kafka://`). See the [Set options on a changefeed](#set-options-on-a-changefeed) example. - -- The majority of [`CREATE CHANGEFEED`](create-changefeed.html#options) options are compatible with `SET`/`UNSET`. This excludes the following options, which you **cannot** use in an `ALTER CHANGEFEED` statement: - - [`cursor`](create-changefeed.html#cursor-option) - - [`end_time`](create-changefeed.html#end-time) - - [`full_table_name`](create-changefeed.html#full-table-option): This option will not apply to existing tables. To use the fully qualified table name, it is necessary to create a new changefeed. - - [`initial_scan_only`](create-changefeed.html#initial-scan) - -- To use [`initial_scan`](create-changefeed.html#initial-scan) with `ALTER CHANGEFEED`, it is necessary to define a `WITH` clause when running `ADD`. This will set these options on the specific table(s): - - ~~~ sql - ALTER CHANGEFEED {job ID} ADD movr.rides, movr.vehicles WITH initial_scan SET updated UNSET resolved; - ~~~ - - Setting `initial_scan` will trigger an initial scan on the newly added table. You may also explicitly define `no_initial_scan`, though this is already the default behavior. The changefeed does not track the application of this option post scan. 
This means that you will not see the option listed in output or after a `SHOW CHANGEFEED JOB` statement. - -## Required privileges - -To alter a changefeed, the user must be a member of the `admin` role or have the [`CREATECHANGEFEED`](create-user.html#create-a-user-that-can-control-changefeeds) parameter set. - -## Examples - -### Modify a changefeed - -To use the `ALTER CHANGEFEED` statement to modify a changefeed, it is necessary to first pause the running changefeed. The following example demonstrates creating a changefeed, pausing the changefeed, modifying it, and then resuming the changefeed. - -{{site.data.alerts.callout_info}} -For more information on enabling changefeeds, see [Create and Configure Changefeeds](create-and-configure-changefeeds.html). -{{site.data.alerts.end}} - -1. First, create the changefeed. This example changefeed will emit change messages to a cloud storage sink on two watched tables. The emitted messages will include the [`resolved`](create-changefeed.html#resolved-option), [`updated`](create-changefeed.html#updated-option), and [`schema_change_policy`](create-changefeed.html#schema-policy) options: - - {% include_cached copy-clipboard.html %} - ~~~ sql - CREATE CHANGEFEED FOR TABLE movr.users, movr.vehicles INTO 's3://{BUCKET_NAME}?AWS_ACCESS_KEY_ID={ACCESS_KEY_ID}&AWS_SECRET_ACCESS_KEY={SECRET_ACCESS_KEY}' - WITH resolved, updated, schema_change_policy = backfill; - ~~~ - - ~~~ - job_id - ---------------------- - 745448689649516545 - (1 row) - ~~~ - -1. Use [`SHOW CHANGEFEED JOB`](show-jobs.html#show-changefeed-jobs) with the job_ID to view the details of a changefeed: - - {% include_cached copy-clipboard.html %} - ~~~ sql - SHOW CHANGEFEED JOB 745448689649516545; - ~~~ - - ~~~ - job_id | description | user_name | status | running_status | created | started | finished | modified | high_water_timestamp | error | sink_uri | full_table_names | topics | format - -------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------+---------+------------------------------------------+---------------------------+----------------------------+----------+----------------------------+--------------------------------+-------+--------------------------------------------------------------------------------------------------------+------------------------------------------+--------+--------- - 745448689649516545 | CREATE CHANGEFEED FOR TABLE movr.users, movr.vehicles INTO 's3://{BUCKET_NAME}?AWS_ACCESS_KEY_ID={ACCESS_KEY_ID}&AWS_SECRET_ACCESS_KEY=redacted' WITH resolved, schema_change_policy = 'backfill', updated | root | running | running: resolved=1647563286.239010012,0 | 2022-03-18 00:28:06.24559 | 2022-03-18 00:28:06.276709 | NULL | 2022-03-18 00:28:37.250323 | 1647563313622679573.0000000000 | | s3://{BUCKET_NAME}?AWS_ACCESS_KEY_ID={ACCESS_KEY_ID}&AWS_SECRET_ACCESS_KEY=redacted | {movr.public.vehicles,movr.public.users} | NULL | json - (1 row) - ~~~ - - To output a list of all changefeeds on the cluster, run the following: - - {% include_cached copy-clipboard.html %} - ~~~ sql - SHOW CHANGEFEED JOBS; - ~~~ - -1. In preparation for modifying the created changefeed, use [`PAUSE JOB`](pause-job.html): - - {% include_cached copy-clipboard.html %} - ~~~ sql - PAUSE JOB 745448689649516545; - ~~~ - -1. 
With the changefeed paused, run the `ALTER CHANGEFEED` statement with `ADD`, `DROP`, `SET`, or `UNSET` to change the target tables or options: - - {% include_cached copy-clipboard.html %} - ~~~ sql - ALTER CHANGEFEED 745448689649516545 DROP movr.vehicles UNSET resolved SET diff; - ~~~ - - ~~~ - job_id | job_description - -------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- - 745448689649516545 | CREATE CHANGEFEED FOR TABLE movr.public.users INTO 's3://{BUCKET_NAME}?AWS_ACCESS_KEY_ID={ACCESS_KEY_ID}&AWS_SECRET_ACCESS_KEY=redacted' WITH diff, schema_change_policy = 'backfill', updated - (1 row) - ~~~ - - The output from `ALTER CHANGEFEED` will show the `CREATE CHANGEFEED` statement with the options you've defined. After modifying a changefeed with `ALTER CHANGEFEED`, the `CREATE` description will show the fully qualified table name. - - For an explanation on each of these options, see the `CREATE CHANGEFEED` [options](create-changefeed.html#options). - -1. Resume the changefeed job with `RESUME JOB`: - - {% include_cached copy-clipboard.html %} - ~~~ sql - RESUME JOB 745448689649516545; - ~~~ - -### Add targets to a changefeed - -The following statement adds the `vehicles` and `rides` tables as new table targets to the changefeed: - -{% include_cached copy-clipboard.html %} -~~~ sql - ALTER CHANGEFEED {job_ID} ADD movr.rides, movr.vehicles; -~~~ - -To add a table that has [column families](column-families.html), see the [example](#modify-a-changefeed-targeting-tables-with-column-families). - -### Drop targets from a changefeed - -The following statement removes the `rides` table from the changefeed's table targets: - -{% include_cached copy-clipboard.html %} -~~~ sql - ALTER CHANGEFEED {job_ID} DROP movr.rides; -~~~ - -### Set options on a changefeed - -Use `SET` to add a new option(s) to a changefeed: - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER CHANGEFEED {job_ID} SET resolved='10s', envelope=key_only; -~~~ - -`ALTER CHANGEFEED ... SET` can implement the [`CREATE CHANGEFEED`](create-changefeed.html#options) options with some [exceptions](#options). - -Use the `sink` option to change the sink URI to which the changefeed emits messages: - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER CHANGEFEED {job_ID} - SET sink = 's3://{BUCKET NAME}?AWS_ACCESS_KEY_ID={ACCESS_KEY_ID}&AWS_SECRET_ACCESS_KEY={SECRET_ACCESS_KEY}' - UNSET resolved; -~~~ - -The type (or scheme) of the sink **cannot** change. That is, if the changefeed was originally sending messages to `kafka://`, for example, then you can only change to a different Kafka URI. Similarly, for cloud storage sinks, the cloud storage scheme must remain the same (e.g., `s3://`), but you can change to a different storage sink on the same cloud provider. - -To change the [sink type](changefeed-sinks.html), create a new changefeed. 
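- -For example, if a changefeed was originally created with a Kafka sink, you can point it at a different Kafka cluster by changing only the URI. This is a minimal sketch; the broker address below is a hypothetical placeholder: - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER CHANGEFEED {job_ID} SET sink = 'kafka://{NEW_KAFKA_HOST}:9092'; -~~~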
- -### Unset options on a changefeed - -To remove options from a changefeed, use `UNSET`: - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER CHANGEFEED {job_ID} UNSET resolved, diff; -~~~ - -### Modify a changefeed targeting tables with column families - -To add a table with [column families](column-families.html) when modifying a changefeed, perform one of the following: - -- Use the `FAMILY` keyword to define specific families: - - {% include_cached copy-clipboard.html %} - ~~~ sql - ALTER CHANGEFEED {job_ID} ADD database.table FAMILY f1, database.table FAMILY f2; - ~~~ - -- Or, set the [`split_column_families`](create-changefeed.html#split-column-families) option: - - {% include_cached copy-clipboard.html %} - ~~~ sql - ALTER CHANGEFEED {job_ID} ADD database.table SET split_column_families; - ~~~ - -To remove a table with column families as a target from the changefeed, you must `DROP` it in the same way that you added it originally as a changefeed target. For example: - -- If you used `FAMILY` to add the table to the changefeed, use `FAMILY` when removing it: - - {% include_cached copy-clipboard.html %} - ~~~ sql - ALTER CHANGEFEED {job_ID} DROP database.table FAMILY f1, database.table FAMILY f2; - ~~~ - - When using the `FAMILY` keyword, it is possible to remove only one family at a time as needed. You will receive an error if you try to remove a table without specifying the `FAMILY` keyword. - -- Or, if you originally added the whole table and its column families with `split_column_families`, then remove it without using the `FAMILY` keyword: - - {% include_cached copy-clipboard.html %} - ~~~ sql - ALTER CHANGEFEED {job_ID} DROP database.table; - ~~~ - -For further discussion on using the `FAMILY` keyword and `split_column_families`, see [Tables with column families in changefeeds](changefeeds-on-tables-with-column-families.html). - -## Known limitations - -- It is necessary to [`PAUSE`](pause-job.html) the changefeed before performing any `ALTER CHANGEFEED` statement. [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/77171) -- `ALTER CHANGEFEED` will accept duplicate targets without sending an error. [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/78285) -- CockroachDB does not keep track of the [`initial_scan`](create-changefeed.html#initial-scan) or `initial_scan_only` options applied to tables. For example: - - ~~~ sql - ALTER CHANGEFEED {job_ID} ADD table WITH initial_scan; - ~~~ - - This will trigger an initial scan of the table and the changefeed will track `table`. The changefeed will **not** track `initial_scan` specified as an option, so it will not display in the output or after a `SHOW CHANGEFEED JOB` statement. -- {% include {{ page.version.version }}/cdc/initial-scan-limit-alter-changefeed.md %} - -## See also - -- [Change Data Capture Overview](change-data-capture-overview.html) -- [`CREATE CHANGEFEED`](create-changefeed.html) -- [Create and Configure Changefeeds](create-and-configure-changefeeds.html) -- [Changefeed Sinks](changefeed-sinks.html) -- [`SHOW JOBS`](show-jobs.html) diff --git a/src/current/v22.1/alter-column.md b/src/current/v22.1/alter-column.md deleted file mode 100644 index 1db5f9598e4..00000000000 --- a/src/current/v22.1/alter-column.md +++ /dev/null @@ -1,293 +0,0 @@ ---- -title: ALTER COLUMN -summary: Use the ALTER COLUMN statement to set, change, or drop a column's DEFAULT constraint or to drop the NOT NULL constraint. 
-toc: true -docs_area: reference.sql ---- - -`ALTER COLUMN` is a subcommand of [`ALTER TABLE`](alter-table.html). You can use `ALTER COLUMN` to do the following: - -- Set, change, or drop a column's [`DEFAULT` constraint](default-value.html). -- Set or drop a column's [`NOT NULL` constraint](not-null.html). -- Set, change, or drop an [`ON UPDATE` expression](create-table.html#on-update-expressions). -- Change a column's [data type](data-types.html). - -{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %} - -{{site.data.alerts.callout_info}} -Support for altering column types is [in preview](cockroachdb-feature-availability.html), with certain limitations. For details, see [Altering column data types](#altering-column-data-types). -{{site.data.alerts.end}} - -{% include {{ page.version.version }}/sql/combine-alter-table-commands.md %} - -## Synopsis - -
-{% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/alter_column.html %} -
- -## Required privileges - -The user must have the `CREATE` [privilege](security-reference/authorization.html#managing-privileges) on the table. - -## Parameters - -| Parameter | Description | -|-----------|-------------| -| `table_name` | The name of the table with the column to modify. | -| `column_name` | The name of the column to modify. | -| `SET DEFAULT a_expr` | The new [default value](default-value.html). | -| `typename` | The new [data type](data-types.html) you want to use.
Support for altering column types is [in preview](cockroachdb-feature-availability.html), with certain limitations. For details, see [Altering column data types](#altering-column-data-types). | -| `USING a_expr` | How to compute a new column value from the old column value. | - -## View schema changes - -{% include {{ page.version.version }}/misc/schema-change-view-job.md %} - -## Altering column data types - -Support for altering column data types is [in preview](cockroachdb-feature-availability.html), with [certain limitations](#limitations-on-altering-data-types). To enable column type altering, set the `enable_experimental_alter_column_type_general` [session variable](set-vars.html) to `true`. - -The following are equivalent in CockroachDB: - -- `ALTER TABLE ... ALTER ... TYPE` -- `ALTER TABLE ... ALTER COLUMN TYPE` -- `ALTER TABLE ... ALTER COLUMN SET DATA TYPE` - -For examples of `ALTER COLUMN TYPE`, see [Examples](#convert-to-a-different-data-type). - -### Limitations on altering data types - -You cannot alter the data type of a column if: - -- The column is part of an [index](indexes.html). -- The column has [`CHECK` constraints](check.html). -- The column owns a [sequence](create-sequence.html). -- The `ALTER COLUMN TYPE` statement is part of a [combined `ALTER TABLE` statement](alter-table.html#subcommands). -- The `ALTER COLUMN TYPE` statement is inside an [explicit transaction](begin-transaction.html). - -{{site.data.alerts.callout_info}} -Most `ALTER COLUMN TYPE` changes are finalized asynchronously. Schema changes on the table with the altered column may be restricted, and writes to the altered column may be rejected until the schema change is finalized. -{{site.data.alerts.end}} - -## Examples - -### Set or change a `DEFAULT` value - -Setting the [`DEFAULT` value constraint](default-value.html) inserts the default value when data is written to the table without an explicitly defined value for the column. If the column already has a `DEFAULT` value set, you can use this statement to change it. - -The following example inserts the Boolean value `true` whenever you insert data into the `subscriptions` table without defining a value for the `newsletter` column. - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE subscriptions ALTER COLUMN newsletter SET DEFAULT true; -~~~ - -### Remove `DEFAULT` constraint - -If the column has a defined [`DEFAULT` value](default-value.html), you can remove the constraint, which means a value will no longer be inserted by default when one is not explicitly defined for the column. - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE subscriptions ALTER COLUMN newsletter DROP DEFAULT; -~~~ - -### Set `NOT NULL` constraint - -Setting the [`NOT NULL` constraint](not-null.html) specifies that the column cannot contain `NULL` values. - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE subscriptions ALTER COLUMN newsletter SET NOT NULL; -~~~ - -### Remove `NOT NULL` constraint - -If the column has the [`NOT NULL` constraint](not-null.html) applied to it, you can remove the constraint, which means the column becomes optional and can have `NULL` values written into it.
- -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE subscriptions ALTER COLUMN newsletter DROP NOT NULL; -~~~ - -### Convert a computed column into a regular column - -{% include {{ page.version.version }}/computed-columns/convert-computed-column.md %} - -### Alter the formula for a computed column - -{% include {{ page.version.version }}/computed-columns/alter-computed-column.md %} - -### Convert to a different data type - -The [TPC-C](performance-benchmarking-with-tpcc-small.html) database has a `customer` table with a column `c_credit_lim` of type `DECIMAL(10,2)`: - -{% include_cached copy-clipboard.html %} -~~~ sql -> WITH x AS (SHOW COLUMNS FROM customer) SELECT column_name, data_type FROM x WHERE column_name='c_credit_lim'; -~~~ - -~~~ - column_name | data_type ----------------+---------------- - c_credit_lim | DECIMAL(10,2) -(1 row) -~~~ - -To change the data type from `DECIMAL` to `STRING`: - -1. Set the `enable_experimental_alter_column_type_general` [session variable](set-vars.html) to `true`: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SET enable_experimental_alter_column_type_general = true; - ~~~ - -1. Alter the column type: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > ALTER TABLE customer ALTER c_credit_lim TYPE STRING; - ~~~ - - ~~~ - NOTICE: ALTER COLUMN TYPE changes are finalized asynchronously; further schema changes on this table may be restricted until the job completes; some writes to the altered column may be rejected until the schema change is finalized - ~~~ - -1. Verify the type: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > WITH x AS (SHOW COLUMNS FROM customer) SELECT column_name, data_type FROM x WHERE column_name='c_credit_lim'; - ~~~ - - ~~~ - column_name | data_type - ---------------+------------ - c_credit_lim | STRING - (1 row) - ~~~ - - -### Change a column type's precision - -The [TPC-C](performance-benchmarking-with-tpcc-small.html) `customer` table contains a column `c_balance` of type `DECIMAL(12,2)`: - -{% include_cached copy-clipboard.html %} -~~~ sql -> WITH x AS (SHOW COLUMNS FROM customer) SELECT column_name, data_type FROM x WHERE column_name='c_balance'; -~~~ - -~~~ - column_name | data_type ---------------+---------------- - c_balance | DECIMAL(12,2) -(1 row) -~~~ - -To increase the precision of the `c_balance` column from `DECIMAL(12,2)` to `DECIMAL(14,2)`: - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE customer ALTER c_balance TYPE DECIMAL(14,2); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> WITH x AS (SHOW COLUMNS FROM customer) SELECT column_name, data_type FROM x WHERE column_name='c_balance'; -~~~ - -~~~ - column_name | data_type ---------------+---------------- - c_balance | DECIMAL(14,2) -(1 row) -~~~ - -### Change a column's type using an expression - -You can change the data type of a column and create a new, computed value from the old column values, with a [`USING` clause](#parameters). 
For example: - -{% include_cached copy-clipboard.html %} -~~~ sql -> WITH x AS (SHOW COLUMNS FROM customer) SELECT column_name, data_type FROM x WHERE column_name='c_discount'; -~~~ - -~~~ - column_name | data_type ---------------+--------------- - c_discount | DECIMAL(4,4) -(1 row) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT c_discount FROM customer LIMIT 10; -~~~ - -~~~ - c_discount --------------- - 0.1569 - 0.4629 - 0.2932 - 0.0518 - 0.3922 - 0.1106 - 0.0622 - 0.4916 - 0.3072 - 0.0316 -(10 rows) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE customer ALTER c_discount TYPE STRING USING ((c_discount*100)::DECIMAL(4,2)::STRING || ' percent'); -~~~ - -~~~ -NOTICE: ALTER COLUMN TYPE changes are finalized asynchronously; further schema changes on this table may be restricted until the job completes; some writes to the altered column may be rejected until the schema change is finalized -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> WITH x AS (SHOW COLUMNS FROM customer) SELECT column_name, data_type FROM x WHERE column_name='c_discount'; -~~~ - -~~~ - column_name | data_type ---------------+------------ - c_discount | STRING -(1 row) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT c_discount FROM customer LIMIT 10; -~~~ - -~~~ - c_discount ------------------ - 15.69 percent - 46.29 percent - 29.32 percent - 5.18 percent - 39.22 percent - 11.06 percent - 6.22 percent - 49.16 percent - 30.72 percent - 3.16 percent -(10 rows) -~~~ - -## See also - -- [Constraints](constraints.html) -- [`ADD CONSTRAINT`](add-constraint.html) -- [`DROP CONSTRAINT`](drop-constraint.html) -- [`ALTER TABLE`](alter-table.html) -- [`SHOW JOBS`](show-jobs.html) -- [Online Schema Changes](online-schema-changes.html) diff --git a/src/current/v22.1/alter-database.md b/src/current/v22.1/alter-database.md deleted file mode 100644 index 51c1b774f93..00000000000 --- a/src/current/v22.1/alter-database.md +++ /dev/null @@ -1,31 +0,0 @@ ---- -title: ALTER DATABASE -summary: Use the ALTER DATABASE statement to change an existing database. -toc: false -docs_area: reference.sql ---- - -The `ALTER DATABASE` [statement](sql-statements.html) applies a schema change to a database. For information on using `ALTER DATABASE`, see the pages for its relevant [subcommands](#subcommands). - -{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %} - -## Subcommands - -Subcommand | Description ------------|------------ -[`CONFIGURE ZONE`](configure-zone.html) | [Configure replication zones](configure-replication-zones.html) for a database. -[`OWNER TO`](owner-to.html) | Change the owner of a database. -[`RENAME TO`](rename-database.html) | Change the name of a database. -[`ADD REGION`](add-region.html) | Add a region to a [multi-region database](multiregion-overview.html). -[`DROP REGION`](drop-region.html) | Drop a region from a [multi-region database](multiregion-overview.html). -[`SET PRIMARY REGION`](set-primary-region.html) | Set the primary region of a [multi-region database](multiregion-overview.html). -[`ADD SUPER REGION`](add-super-region.html) | **New in v22.1:** Add a super region made up of a set of [database regions](multiregion-overview.html#super-regions) such that data from [regional tables](regional-tables.html) will be stored in only those regions. -[`DROP SUPER REGION`](drop-super-region.html) | **New in v22.1:** Drop a super region made up of a set of [database regions](multiregion-overview.html#super-regions). 
-[`ALTER SUPER REGION`](alter-super-region.html) | **New in v22.1:** Alter an existing [super region](multiregion-overview.html#super-regions) to include a different set of regions. A super region is made up of a set of regions added with [`ADD REGION`](add-region.html) such that data from [regional tables](regional-tables.html) will be stored in only those regions. -[`SET {session variable}`](alter-role.html#set-default-session-variable-values-for-a-specific-database) | Set the default session variable values for the database. This syntax is identical to [`ALTER ROLE ALL IN DATABASE SET {session variable}`](alter-role.html). -`RESET {session variable}` | Reset the default session variable values for the database to the system defaults. This syntax is identical to [`ALTER ROLE ALL IN DATABASE RESET {session variable}`](alter-role.html). -[`SURVIVE {ZONE,REGION} FAILURE`](survive-failure.html) | Add a survival goal to a [multi-region database](multiregion-overview.html). - -## Viewing schema changes - -{% include {{ page.version.version }}/misc/schema-change-view-job.md %} diff --git a/src/current/v22.1/alter-default-privileges.md b/src/current/v22.1/alter-default-privileges.md deleted file mode 100644 index a64d1ebcb67..00000000000 --- a/src/current/v22.1/alter-default-privileges.md +++ /dev/null @@ -1,356 +0,0 @@ ---- -title: ALTER DEFAULT PRIVILEGES -summary: The ALTER DEFAULT PRIVILEGES statement alters the default privileges for roles in the current database. -keywords: reflection -toc: true -docs_area: reference.sql ---- - - The `ALTER DEFAULT PRIVILEGES` [statement](sql-statements.html) changes the [default privileges](security-reference/authorization.html#default-privileges) on objects created by [users/roles](security-reference/authorization.html#roles) in the current database. - -{{site.data.alerts.callout_info}} -The creator of an object is also the object's [owner](security-reference/authorization.html#object-ownership). Any roles that are members of the owner role have `ALL` privileges on the object. Altering the default privileges of objects created by a role does not affect that role's privileges as the object's owner. The default privileges granted to other users/roles are always in addition to the ownership (i.e., `ALL`) privileges given to the creator of the object. -{{site.data.alerts.end}} - -{{site.data.alerts.callout_info}} -If you grant a default privilege to a user/role for all objects created by a specific user/role, neither of the users/roles can be dropped until the default privilege is revoked. - -For an example, see [Grant default privileges to a specific role](#grant-default-privileges-to-a-specific-role). -{{site.data.alerts.end}} - -## Synopsis - -
-{% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/alter_default_privileges.html %} -
- -### Parameters - -Parameter | Description -----------|------------ -`FOR ROLE name`/`FOR USER name` | Alter the default privileges on objects created by a specific role/user, or a list of roles/users. -`FOR ALL ROLES` | Alter the default privileges on objects created by all users/roles. -`GRANT ...` | Grant a default privilege or list of privileges on all objects of the specified type to a role/user, or a list of roles/users. -`REVOKE ...` | Revoke a default privilege or list of privileges on all objects of the specified type from a role/user, or a list of roles/users. -`IN SCHEMA qualifiable_schema_name` | **New in v22.1:** If specified, the default privileges are altered for objects created in that schema. If an object has default privileges specified at the database and at the schema level, the union of the default privileges is taken. - -{{site.data.alerts.callout_info}} -If you do not specify a `FOR ...` clause, CockroachDB alters the default privileges on objects created by the current user. -{{site.data.alerts.end}} - -## Required privileges - -- To run `ALTER DEFAULT PRIVILEGES FOR ALL ROLES`, the user must be a member of the [`admin`](security-reference/authorization.html#admin-role) role. -- To alter the default privileges on objects created by a specific role, the user must be a member of that role. - -## Examples - -### Grant default privileges to a specific role - -Run the following statements as a member of the `admin` role, with `ALL` privileges: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE ROLE cockroachlabs WITH LOGIN; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> GRANT CREATE ON DATABASE defaultdb TO cockroachlabs; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE USER max WITH LOGIN; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW DEFAULT PRIVILEGES FOR ROLE cockroachlabs; -~~~ - -~~~ - role | for_all_roles | object_type | grantee | privilege_type -----------------+---------------+-------------+---------------+----------------- - cockroachlabs | false | schemas | cockroachlabs | ALL - cockroachlabs | false | sequences | cockroachlabs | ALL - cockroachlabs | false | tables | cockroachlabs | ALL - cockroachlabs | false | types | cockroachlabs | ALL - cockroachlabs | false | types | public | USAGE -(5 rows) -~~~ - -In the same database, run the following statements as the `cockroachlabs` user: - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER DEFAULT PRIVILEGES FOR ROLE cockroachlabs GRANT SELECT ON TABLES TO max; -~~~ - -{{site.data.alerts.callout_info}} -Because `cockroachlabs` is the current user, the previous statement is equivalent to `ALTER DEFAULT PRIVILEGES GRANT SELECT ON TABLES TO max;`. 
-{{site.data.alerts.end}} - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW DEFAULT PRIVILEGES; -~~~ - -~~~ - role | for_all_roles | object_type | grantee | privilege_type -----------------+---------------+-------------+---------------+----------------- - cockroachlabs | false | schemas | cockroachlabs | ALL - cockroachlabs | false | sequences | cockroachlabs | ALL - cockroachlabs | false | tables | cockroachlabs | ALL - cockroachlabs | false | tables | max | SELECT - cockroachlabs | false | types | cockroachlabs | ALL - cockroachlabs | false | types | public | USAGE -(6 rows) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE albums ( - id UUID PRIMARY KEY, - title STRING, - length DECIMAL, - tracklist JSONB -); -~~~ - -In the same database, run the following statements as the `max` user: - -{% include_cached copy-clipboard.html %} -~~~ sql -> DROP TABLE albums; -~~~ - -~~~ -ERROR: user max does not have DROP privilege on relation albums -SQLSTATE: 42501 -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM albums; -~~~ - -~~~ - column_name | data_type | is_nullable | column_default | generation_expression | indices | is_hidden ---------------+-----------+-------------+----------------+-----------------------+-----------+------------ - id | UUID | false | NULL | | {primary} | false - title | STRING | true | NULL | | {primary} | false - length | DECIMAL | true | NULL | | {primary} | false - tracklist | JSONB | true | NULL | | {primary} | false -(4 rows) -~~~ - -Because `max` has default `SELECT` privileges on all tables created by `cockroachlabs`, neither user/role can be dropped until all privileges are revoked. - -To see this, run the following statements as a member of the `admin` role, with `ALL` privileges: - -{% include_cached copy-clipboard.html %} -~~~ sql -> DROP USER max; -~~~ - -~~~ -ERROR: cannot drop role/user max: grants still exist on defaultdb.public.albums -SQLSTATE: 2BP01 -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> DROP USER cockroachlabs; -~~~ - -~~~ -ERROR: cannot drop role/user cockroachlabs: grants still exist on defaultdb, defaultdb.public.albums -SQLSTATE: 2BP01 -~~~ - -### Revoke default privileges from a specific role - -Run the following statements as the `cockroachlabs` user: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW DEFAULT PRIVILEGES; -~~~ - -~~~ - role | for_all_roles | object_type | grantee | privilege_type -----------------+---------------+-------------+---------------+----------------- - cockroachlabs | false | schemas | cockroachlabs | ALL - cockroachlabs | false | sequences | cockroachlabs | ALL - cockroachlabs | false | tables | cockroachlabs | ALL - cockroachlabs | false | tables | max | SELECT - cockroachlabs | false | types | cockroachlabs | ALL - cockroachlabs | false | types | public | USAGE -(6 rows) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER DEFAULT PRIVILEGES FOR ROLE cockroachlabs REVOKE SELECT ON TABLES FROM max; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW DEFAULT PRIVILEGES; -~~~ - -~~~ - role | for_all_roles | object_type | grantee | privilege_type -----------------+---------------+-------------+---------------+----------------- - cockroachlabs | false | schemas | cockroachlabs | ALL - cockroachlabs | false | sequences | cockroachlabs | ALL - cockroachlabs | false | tables | cockroachlabs | ALL - cockroachlabs | false | types | cockroachlabs | ALL - cockroachlabs | false | types | public | USAGE -(5 
rows) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE tracks ( - id UUID PRIMARY KEY, - album_id UUID, - title STRING, - length DECIMAL -); -~~~ - -In the same database, run the following statements as the `max` user: - -{% include_cached copy-clipboard.html %} -~~~ sql -> DROP TABLE albums; -~~~ - -~~~ -ERROR: user max does not have DROP privilege on relation albums -SQLSTATE: 42501 -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM albums; -~~~ - -~~~ - column_name | data_type | is_nullable | column_default | generation_expression | indices | is_hidden ---------------+-----------+-------------+----------------+-----------------------+-----------+------------ - id | UUID | false | NULL | | {primary} | false - title | STRING | true | NULL | | {primary} | false - length | DECIMAL | true | NULL | | {primary} | false - tracklist | JSONB | true | NULL | | {primary} | false -(4 rows) -~~~ - -`max` still has `SELECT` privileges on `albums` because when `cockroachlabs` created `albums`, `max` was granted default `SELECT` privileges on all tables created by `cockroachlabs`. - -{% include_cached copy-clipboard.html %} -~~~ sql -> REVOKE SELECT ON TABLE albums FROM max; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> DROP TABLE tracks; -~~~ - -~~~ -ERROR: user max does not have DROP privilege on relation tracks -SQLSTATE: 42501 -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM tracks; -~~~ - -~~~ -ERROR: user max has no privileges on relation tracks -SQLSTATE: 42501 -~~~ - -`cockroachlabs` created the `tracks` table after revoking default `SELECT` privileges from `max`. As a result, `max` never had `SELECT` privileges on `tracks`. - -Because `max` has no default privileges, the user can now be dropped: - -{% include_cached copy-clipboard.html %} -~~~ sql -> DROP USER max; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW USERS; -~~~ - -~~~ - username | options | member_of -----------------+---------+------------ - admin | | {} - cockroachlabs | | {} - root | | {admin} -(3 rows) -~~~ - -### Grant default privileges for all roles - -Run the following statements as a member of the `admin` role, with `ALL` privileges: - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER DEFAULT PRIVILEGES FOR ALL ROLES GRANT SELECT ON TABLES TO public; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW DEFAULT PRIVILEGES FOR ALL ROLES; -~~~ - -~~~ - role | for_all_roles | object_type | grantee | privilege_type --------+---------------+-------------+---------+----------------- - NULL | true | tables | public | SELECT - NULL | true | types | public | USAGE -(2 rows) -~~~ - -In the same database, run the following statements as any two different users: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE discographies ( - id UUID PRIMARY KEY, - artist STRING, - total_length DECIMAL -); -~~~ - -{{site.data.alerts.callout_info}} -[`CREATE TABLE`](create-table.html) requires the `CREATE` privilege on the database. 
-{{site.data.alerts.end}} - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM discographies; -~~~ - -~~~ - column_name | data_type | is_nullable | column_default | generation_expression | indices | is_hidden ----------------+-----------+-------------+----------------+-----------------------+-----------+------------ - id | UUID | false | NULL | | {primary} | false - artist | STRING | true | NULL | | {primary} | false - total_length | DECIMAL | true | NULL | | {primary} | false -(3 rows) -~~~ - -## See also - -- [`SHOW DEFAULT PRIVILEGES`](show-default-privileges.html) -- [SQL Statements](sql-statements.html) -- [Default Privileges](security-reference/authorization.html#default-privileges) diff --git a/src/current/v22.1/alter-index.md b/src/current/v22.1/alter-index.md deleted file mode 100644 index 4f0a234188a..00000000000 --- a/src/current/v22.1/alter-index.md +++ /dev/null @@ -1,75 +0,0 @@ ---- -title: ALTER INDEX -summary: Use the ALTER INDEX statement to change an existing index. -toc: true -docs_area: reference.sql ---- - -The `ALTER INDEX` [statement](sql-statements.html) changes the definition of an index. For information on using `ALTER INDEX`, see the pages for its [subcommands](#subcommands). - -{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %} - -## Subcommands - -Subcommand | Description ------------|------------ -[`CONFIGURE ZONE`](configure-zone.html) | [Configure replication zones](configure-replication-zones.html) for an index. -[`PARTITION BY`](partition-by.html) | Partition, re-partition, or un-partition an index. ([Enterprise-only](enterprise-licensing.html)). -[`RENAME TO`](rename-index.html) | Change the name of an index. -[`SPLIT AT`](split-at.html) | Force a [range split](architecture/distribution-layer.html#range-splits) at the specified row in the index. -[`UNSPLIT AT`](unsplit-at.html) | Remove a range split enforcement in the index. 
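- -For instance, the `SPLIT AT` subcommand forces a range split at a specified row of an index. The following is a minimal sketch that assumes a `users` table with a secondary index named `name_idx` (as in the examples below); the split value is an arbitrary `name`: - -{% include copy-clipboard.html %} -~~~ sql -> ALTER INDEX users@name_idx SPLIT AT VALUES ('rachel'); -~~~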
- -## View schema changes - -{% include {{ page.version.version }}/misc/schema-change-view-job.md %} - -## Examples - -{% include {{ page.version.version }}/sql/movr-statements-geo-partitioned-replicas.md %} - -### Rename an index - -{% include copy-clipboard.html %} -~~~ sql -> SHOW INDEXES FROM users; -~~~ - -~~~ -+------------+------------+------------+--------------+-------------+-----------+---------+----------+ -| table_name | index_name | non_unique | seq_in_index | column_name | direction | storing | implicit | -+------------+------------+------------+--------------+-------------+-----------+---------+----------+ -| users | primary | false | 1 | id | ASC | false | false | -| users | name_idx | true | 1 | name | ASC | false | false | -| users | name_idx | true | 2 | id | ASC | false | true | -+------------+------------+------------+--------------+-------------+-----------+---------+----------+ -(3 rows) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> ALTER INDEX users@name_idx RENAME TO users_name_idx; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW INDEXES FROM users; -~~~ - -~~~ -+------------+----------------+------------+--------------+-------------+-----------+---------+----------+ -| table_name | index_name | non_unique | seq_in_index | column_name | direction | storing | implicit | -+------------+----------------+------------+--------------+-------------+-----------+---------+----------+ -| users | primary | false | 1 | id | ASC | false | false | -| users | users_name_idx | true | 1 | name | ASC | false | false | -| users | users_name_idx | true | 2 | id | ASC | false | true | -+------------+----------------+------------+--------------+-------------+-----------+---------+----------+ -(3 rows) -~~~ - -### Create a replication zone for a secondary index - -{% include {{ page.version.version }}/zone-configs/create-a-replication-zone-for-a-secondary-index.md %} - -### Split and unsplit an index - -For examples, see [Split an index](split-at.html#split-an-index) and [Unsplit an index](unsplit-at.html#unsplit-an-index). diff --git a/src/current/v22.1/alter-partition.md b/src/current/v22.1/alter-partition.md deleted file mode 100644 index 5faf3d8fb18..00000000000 --- a/src/current/v22.1/alter-partition.md +++ /dev/null @@ -1,42 +0,0 @@ ---- -title: ALTER PARTITION -summary: Use the ALTER PARTITION statement to configure the replication zone for a partition. -toc: true -docs_area: reference.sql ---- - -The `ALTER PARTITION` [statement](sql-statements.html) is used to configure replication zones for [partitioning](partitioning.html). See the [`CONFIGURE ZONE`](configure-zone.html) subcommand for more details. - -{% include enterprise-feature.md %} - -## Synopsis - -
-{% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/alter_zone_partition.html %} -
- -## Required privileges - -The user must have the [`CREATE`](grant.html#supported-privileges) privilege on the table. - -## Parameters - - Parameter | Description ------------+------------- -`table_name` | The name of the [table](create-table.html) with the [replication zone configurations](configure-replication-zones.html) to modify. -`partition_name` | The name of the [partition](partitioning.html) with the [replication zone configurations](configure-replication-zones.html) to modify. -`index_name` | The name of the [index](indexes.html) with the [replication zone configurations](configure-replication-zones.html) to modify. -`variable` | The name of the [variable](#variables) to change. -`value` | The value of the variable to change. - -### Variables - -{% include {{ page.version.version }}/zone-configs/variables.md %} - -## Examples - -{% include {{ page.version.version }}/sql/movr-statements-geo-partitioned-replicas.md %} - -### Create a replication zone for a partition - -{% include {{ page.version.version }}/zone-configs/create-a-replication-zone-for-a-table-partition.md hide-enterprise-warning="true" %} diff --git a/src/current/v22.1/alter-primary-key.md b/src/current/v22.1/alter-primary-key.md deleted file mode 100644 index 07cb556fcb3..00000000000 --- a/src/current/v22.1/alter-primary-key.md +++ /dev/null @@ -1,115 +0,0 @@ ---- -title: ALTER PRIMARY KEY -summary: Use the ALTER PRIMARY KEY statement to change the primary key of a table. -toc: true -docs_area: reference.sql ---- - -The `ALTER PRIMARY KEY` [statement](sql-statements.html) is a subcommand of [`ALTER TABLE`](alter-table.html) that can be used to change the [primary key](primary-key.html) of a table. - -{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %} - -## Watch the demo - -{% include_cached youtube.html video_id="MPx-LXY2D-c" %} - -## Details - -- You cannot change the primary key of a table that is currently undergoing a primary key change, or any other [schema change](online-schema-changes.html). - -- `ALTER PRIMARY KEY` might need to rewrite multiple indexes, which can make it an expensive operation. - -- When you change a primary key with `ALTER PRIMARY KEY`, the old primary key index becomes a [`UNIQUE`](unique.html) secondary index. This helps optimize the performance of queries that still filter on the old primary key column. - -- `ALTER PRIMARY KEY` does not alter the [partitions](partitioning.html) on a table or its indexes, even if a partition is defined on [a column in the original primary key](partitioning.html#partition-using-primary-key). If you alter the primary key of a partitioned table, you must update the table partition accordingly. - -- The secondary index created by `ALTER PRIMARY KEY` will not be partitioned, even if a partition is defined on [a column in the original primary key](partitioning.html#partition-using-primary-key). To ensure that the table is partitioned correctly, you must create a partition on the secondary index, or drop the secondary index. - -- Any new primary key column set by `ALTER PRIMARY KEY` must have an existing [`NOT NULL` constraint](not-null.html). To add a `NOT NULL` constraint to an existing column, use [`ALTER TABLE ... ALTER COLUMN ... SET NOT NULL`](alter-column.html#set-not-null-constraint). - -{{site.data.alerts.callout_success}} -To change an existing primary key without creating a secondary index from that primary key, use [`DROP CONSTRAINT ... PRIMARY KEY`/`ADD CONSTRAINT ... 
PRIMARY KEY`](add-constraint.html#changing-primary-keys-with-add-constraint-primary-key). For examples, see the [`ADD CONSTRAINT`](add-constraint.html#examples) and [`DROP CONSTRAINT`](drop-constraint.html#examples) pages. -{{site.data.alerts.end}} - -## Synopsis - -
-{% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/alter_primary_key.html %} -
- -## Parameters - - Parameter | Description ------------|------------- - `table_name` | The name of the table with the primary key that you want to modify. - `index_params` | The name of the column(s) that you want to use for the primary key. These columns replace the current primary key column(s). - `USING HASH` | Creates a [hash-sharded index](hash-sharded-indexes.html). - -## Required privileges - -The user must have the `CREATE` [privilege](security-reference/authorization.html#managing-privileges) on a table to alter its primary key. - -## Viewing schema changes - -{% include {{ page.version.version }}/misc/schema-change-view-job.md %} - -## Examples - -### Alter a single-column primary key - -Suppose that you are storing the data for users of your application in a table called `users`, defined by the following `CREATE TABLE` statement: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE users ( - name STRING PRIMARY KEY, - email STRING -); -~~~ - -The primary key of this table is on the `name` column. This is a poor choice, as some users likely have the same name, and all primary keys enforce a `UNIQUE` constraint on row values of the primary key column. Per our [best practices](performance-best-practices-overview.html#use-uuid-to-generate-unique-ids), you should instead use a `UUID` for single-column primary keys, and populate the rows of the table with generated, unique values. - -You can add a column and change the primary key with a couple of `ALTER TABLE` statements: - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE users ADD COLUMN id UUID NOT NULL DEFAULT gen_random_uuid(); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE users ALTER PRIMARY KEY USING COLUMNS (id); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW CREATE TABLE users; -~~~ - -~~~ - table_name | create_statement --------------+-------------------------------------------------- - users | CREATE TABLE users ( - | name STRING NOT NULL, - | email STRING NULL, - | id UUID NOT NULL DEFAULT gen_random_uuid(), - | CONSTRAINT users_pkey PRIMARY KEY (id ASC), - | UNIQUE INDEX users_name_key (name ASC) - | ) -(1 row) -~~~ - -### Alter an existing primary key to use hash sharding - -{% include {{page.version.version}}/performance/alter-primary-key-hash-sharded.md %} - -Note that the old primary key index becomes a secondary index, in this case, `users_name_key`. If you do not want the old primary key to become a secondary index when changing a primary key, you can use [`DROP CONSTRAINT`](drop-constraint.html)/[`ADD CONSTRAINT`](add-constraint.html) instead. - -## See also - -- [Constraints](constraints.html) -- [`ADD CONSTRAINT`](add-constraint.html) -- [`DROP CONSTRAINT`](drop-constraint.html) -- [`ALTER TABLE`](alter-table.html) -- [`SHOW JOBS`](show-jobs.html) -- [Online Schema Changes](online-schema-changes.html) diff --git a/src/current/v22.1/alter-range-relocate.md b/src/current/v22.1/alter-range-relocate.md deleted file mode 100644 index 1b74055cd87..00000000000 --- a/src/current/v22.1/alter-range-relocate.md +++ /dev/null @@ -1,228 +0,0 @@ ---- -title: ALTER RANGE ... RELOCATE -summary: Use the ALTER RANGE ... RELOCATE statement to move a lease or replica between stores in an emergency situation. -toc: true -docs_area: reference.sql ---- - -{% include_cached new-in.html version="v22.1" %} The `ALTER RANGE ... RELOCATE` statement is a subcommand of [`ALTER RANGE`](alter-range.html). 
It is used to move a lease or [replica](architecture/overview.html#architecture-replica) between [stores](cockroach-start.html#store), which can help you relocate data in the cluster during an emergency. - -{{site.data.alerts.callout_danger}} -Most users should not need to use this statement; it is intended for emergency situations. If you are in an emergency and think this statement may help, Cockroach Labs recommends contacting [support](support-resources.html). -{{site.data.alerts.end}} - -{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %} - -## Synopsis - -
-{% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/alter_range_relocate.html %} -
- -## Required privileges - -To alter a range and move a lease or replica between stores, the user must have one of the following: - -- Membership to the [`admin`](security-reference/authorization.html#admin-role) role for the cluster. - -## Examples - -### Find the cluster store IDs - -To use `ALTER RANGE ... RELOCATE`, you will need to know your cluster's store IDs. To get the store IDs, run the following statement: - -{% include_cached copy-clipboard.html %} -~~~ sql -SELECT store_id FROM crdb_internal.kv_store_status; -~~~ - -~~~ - store_id ------------ - 1 - 2 - 3 - 4 - 5 - 6 - 7 - 8 - 9 -(9 rows) -~~~ - -### Find range ID and leaseholder information - -To use `ALTER RANGE ... RELOCATE`, you need to know how to find the range ID, leaseholder, and other information for a [table](show-ranges.html#show-ranges-for-a-table-primary-index), [index](show-ranges.html#show-ranges-for-an-index), or [database](show-ranges.html#show-ranges-for-a-database). You can find this information using the [`SHOW RANGES`](show-ranges.html) statement. - -For example, to get all range IDs, leaseholder store IDs, and leaseholder localities for the [`movr.users`](movr.html) table, use the following query: - -{% include_cached copy-clipboard.html %} -~~~ sql -WITH user_info AS (SHOW RANGES FROM TABLE users) SELECT range_id, lease_holder, lease_holder_locality FROM user_info; -~~~ - -~~~ - range_id | lease_holder | lease_holder_locality ------------+--------------+--------------------------- - 70 | 3 | region=us-east1,az=d - 67 | 9 | region=europe-west1,az=d - 66 | 3 | region=us-east1,az=d - 65 | 3 | region=us-east1,az=d - 69 | 3 | region=us-east1,az=d - 45 | 2 | region=us-east1,az=c - 50 | 2 | region=us-east1,az=c - 46 | 2 | region=us-east1,az=c - 49 | 2 | region=us-east1,az=c -(9 rows) -~~~ - - - -### Move the lease for a range to a specified store - -To move the lease for range ID 70 to store ID 4: - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER RANGE 70 RELOCATE LEASE TO 4; -~~~ - -~~~ - range_id | pretty | result ------------+------------+--------- - 70 | /Table/106 | ok -(1 row) -~~~ - -### Move the lease for all of a table's ranges to a store - -To move the leases for all data in the [`movr.users`](movr.html) table to a specific store: - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER RANGE RELOCATE LEASE TO 2 FOR SELECT range_id from crdb_internal.ranges where table_name = 'users' -~~~ - -~~~ - range_id | pretty | result ------------+----------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- - 70 | /Table/106 | unable to find store 2 in range r70:/Table/106{-/1/"amsterdam"/"\xb333333@\x00\x80\x00\x00\x00\x00\x00\x00#"} [(n7,s7):1, (n3,s3):4, (n4,s4):5, next=6, gen=27] - 67 | /Table/106/1/"amsterdam"/"\xb333333@\x00\x80\x00\x00\x00\x00\x00\x00#" | unable to find store 2 in range r67:/Table/106/1/"{amsterdam"/"\xb333333@\x00\x80\x00\x00\x00\x00\x00\x00#"-boston"/"333333D\x00\x80\x00\x00\x00\x00\x00\x00\n"} [(n3,s3):4, (n9,s9):6, (n6,s6):7, next=8, gen=34, sticky=9223372036.854775807,2147483647] - 66 | /Table/106/1/"boston"/"333333D\x00\x80\x00\x00\x00\x00\x00\x00\n" | unable to find store 2 in range 
r66:/Table/106/1/"{boston"/"333333D\x00\x80\x00\x00\x00\x00\x00\x00\n"-los angeles"/"\x99\x99\x99\x99\x99\x99H\x00\x80\x00\x00\x00\x00\x00\x00\x1e"} [(n7,s7):1, (n3,s3):4, (n4,s4):5, next=6, gen=25, sticky=9223372036.854775807,2147483647] - 65 | /Table/106/1/"los angeles"/"\x99\x99\x99\x99\x99\x99H\x00\x80\x00\x00\x00\x00\x00\x00\x1e" | unable to find store 2 in range r65:/Table/106/1/"{los angeles"/"\x99\x99\x99\x99\x99\x99H\x00\x80\x00\x00\x00\x00\x00\x00\x1e"-new york"/"\x19\x99\x99\x99\x99\x99J\x00\x80\x00\x00\x00\x00\x00\x00\x05"} [(n7,s7):1, (n3,s3):4, (n4,s4):5, next=6, gen=25, sticky=9223372036.854775807,2147483647] - 69 | /Table/106/1/"new york"/"\x19\x99\x99\x99\x99\x99J\x00\x80\x00\x00\x00\x00\x00\x00\x05" | unable to find store 2 in range r69:/Table/106/1/"{new york"/"\x19\x99\x99\x99\x99\x99J\x00\x80\x00\x00\x00\x00\x00\x00\x05"-paris"/"\xcc\xcc\xcc\xcc\xcc\xcc@\x00\x80\x00\x00\x00\x00\x00\x00("} [(n9,s9):5, (n3,s3):4, (n4,s4):3, next=6, gen=29, sticky=9223372036.854775807,2147483647] - 45 | /Table/106/1/"paris"/"\xcc\xcc\xcc\xcc\xcc\xcc@\x00\x80\x00\x00\x00\x00\x00\x00(" | ok - 50 | /Table/106/1/"san francisco"/"\x80\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x19" | ok - 46 | /Table/106/1/"seattle"/"ffffffH\x00\x80\x00\x00\x00\x00\x00\x00\x14" | ok - 49 | /Table/106/1/"washington dc"/"L\xcc\xcc\xcc\xcc\xccL\x00\x80\x00\x00\x00\x00\x00\x00\x0f" | ok -(9 rows) -~~~ - -When it isn't possible to move a lease for a range to the specified store, the `result` column will show the message `unable to find store ...` as shown above. - -### Move a replica from one store to another store - -If you know the store where a range's replica is located, you can move it to another store: - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER RANGE 45 RELOCATE FROM 2 to 4; -~~~ - -~~~ - range_id | pretty | result ------------+-----------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- - 45 | /Table/106/1/"paris"/"\xcc\xcc\xcc\xcc\xcc\xcc@\x00\x80\x00\x00\x00\x00\x00\x00(" | removing learners from r45:/Table/106/1/"{paris"/"\xcc\xcc\xcc\xcc\xcc\xcc@\x00\x80\x00\x00\x00\x00\x00\x00("-san francisco"/"\x80\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x19"} [(n2,s2):1LEARNER, (n8,s8):2, (n5,s5):3, (n4,s4):4, next=5, gen=14, sticky=9223372036.854775807,2147483647]: change replicas of r45 failed: descriptor changed: [expected] r45:/Table/106/1/"{paris"/"\xcc\xcc\xcc\xcc\xcc\xcc@\x00\x80\x00\x00\x00\x00\x00\x00("-san francisco"/"\x80\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x19"} [(n2,s2):1LEARNER, (n8,s8):2, (n5,s5):3, (n4,s4):4, next=5, gen=14, 
sticky=9223372036.854775807,2147483647] != [actual] r45:/Table/106/1/"{paris"/"\xcc\xcc\xcc\xcc\xcc\xcc@\x00\x80\x00\x00\x00\x00\x00\x00("-san francisco"/"\x80\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x19"} [(n4,s4):4, (n8,s8):2, (n5,s5):3, next=5, gen=15, sticky=9223372036.854775807,2147483647] -(1 row) -~~~ - -### Move all of a table's replicas on one store to another store - -To move the replicas for all data in the [`movr.users`](movr.html) table on one store to another store: - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER RANGE RELOCATE FROM 2 TO 7 FOR SELECT range_id from crdb_internal.ranges where table_name = 'users'; -~~~ - -~~~ - range_id | pretty | result ------------+----------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- - 70 | /Table/106 | trying to add a voter to a store that already has a VOTER_FULL - 67 | /Table/106/1/"amsterdam"/"\xb333333@\x00\x80\x00\x00\x00\x00\x00\x00#" | trying to remove a replica that doesn't exist: {ChangeType:REMOVE_VOTER Target:n2,s2} - 66 | /Table/106/1/"boston"/"333333D\x00\x80\x00\x00\x00\x00\x00\x00\n" | trying to add a voter to a store that already has a VOTER_FULL - 65 | /Table/106/1/"los angeles"/"\x99\x99\x99\x99\x99\x99H\x00\x80\x00\x00\x00\x00\x00\x00\x1e" | trying to add a voter to a store that already has a VOTER_FULL - 69 | /Table/106/1/"new york"/"\x19\x99\x99\x99\x99\x99J\x00\x80\x00\x00\x00\x00\x00\x00\x05" | trying to remove a replica that doesn't exist: {ChangeType:REMOVE_VOTER Target:n2,s2} - 45 | /Table/106/1/"paris"/"\xcc\xcc\xcc\xcc\xcc\xcc@\x00\x80\x00\x00\x00\x00\x00\x00(" | trying to remove a replica that doesn't exist: {ChangeType:REMOVE_VOTER Target:n2,s2} - 50 | /Table/106/1/"san francisco"/"\x80\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x19" | change replicas of r50 failed: descriptor changed: [expected] r50:/Table/106/1/"s{an francisco"/"\x80\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x19"-eattle"/"ffffffH\x00\x80\x00\x00\x00\x00\x00\x00\x14"} [(n2,s2):1, (n8,s8):2, (n5,s5):3, (n7,s7):4LEARNER, next=5, gen=12, sticky=9223372036.854775807,2147483647] != [actual] r50:/Table/106/1/"s{an francisco"/"\x80\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x19"-eattle"/"ffffffH\x00\x80\x00\x00\x00\x00\x00\x00\x14"} [(n2,s2):1, (n8,s8):2, (n5,s5):3, next=5, gen=13, sticky=9223372036.854775807,2147483647] - 46 | /Table/106/1/"seattle"/"ffffffH\x00\x80\x00\x00\x00\x00\x00\x00\x14" | removing learners from r46:/Table/106/1/"{seattle"/"ffffffH\x00\x80\x00\x00\x00\x00\x00\x00\x14"-washington dc"/"L\xcc\xcc\xcc\xcc\xccL\x00\x80\x00\x00\x00\x00\x00\x00\x0f"} [(n2,s2):1LEARNER, (n8,s8):2, (n5,s5):3, 
(n7,s7):4, next=5, gen=14, sticky=9223372036.854775807,2147483647]: change replicas of r46 failed: descriptor changed: [expected] r46:/Table/106/1/"{seattle"/"ffffffH\x00\x80\x00\x00\x00\x00\x00\x00\x14"-washington dc"/"L\xcc\xcc\xcc\xcc\xccL\x00\x80\x00\x00\x00\x00\x00\x00\x0f"} [(n2,s2):1LEARNER, (n8,s8):2, (n5,s5):3, (n7,s7):4, next=5, gen=14, sticky=9223372036.854775807,2147483647] != [actual] r46:/Table/106/1/"{seattle"/"ffffffH\x00\x80\x00\x00\x00\x00\x00\x00\x14"-washington dc"/"L\xcc\xcc\xcc\xcc\xccL\x00\x80\x00\x00\x00\x00\x00\x00\x0f"} [(n7,s7):4, (n8,s8):2, (n5,s5):3, next=5, gen=15, sticky=9223372036.854775807,2147483647] - 49 | /Table/106/1/"washington dc"/"L\xcc\xcc\xcc\xcc\xccL\x00\x80\x00\x00\x00\x00\x00\x00\x0f" | ok -(9 rows) -~~~ - -See the `result` column in the output for the status of the operation. If it's `ok`, the replica was moved with no issues. Other messages will indicate whether the target store is already full (`VOTER_FULL`), or if the replica you're trying to remove doesn't exist. - -### Move all of a range's voting replicas from one store to another store - -To move all of a range's voting replicas from one store to another store: - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER RANGE RELOCATE VOTERS FROM 7 TO 2 FOR SELECT range_id from crdb_internal.ranges where table_name = 'users'; -~~~ - -~~~ - range_id | pretty | result ------------+----------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- - 70 | /Table/106 | ok - 67 | /Table/106/1/"amsterdam"/"\xb333333@\x00\x80\x00\x00\x00\x00\x00\x00#" | trying to remove a replica that doesn't exist: {ChangeType:REMOVE_VOTER Target:n7,s7} - 66 | /Table/106/1/"boston"/"333333D\x00\x80\x00\x00\x00\x00\x00\x00\n" | removing learners from r66:/Table/106/1/"{boston"/"333333D\x00\x80\x00\x00\x00\x00\x00\x00\n"-los angeles"/"\x99\x99\x99\x99\x99\x99H\x00\x80\x00\x00\x00\x00\x00\x00\x1e"} [(n7,s7):1LEARNER, (n3,s3):4, (n4,s4):5, (n2,s2):6, next=7, gen=28, sticky=9223372036.854775807,2147483647]: change replicas of r66 failed: descriptor changed: [expected] r66:/Table/106/1/"{boston"/"333333D\x00\x80\x00\x00\x00\x00\x00\x00\n"-los angeles"/"\x99\x99\x99\x99\x99\x99H\x00\x80\x00\x00\x00\x00\x00\x00\x1e"} [(n7,s7):1LEARNER, (n3,s3):4, (n4,s4):5, (n2,s2):6, next=7, gen=28, sticky=9223372036.854775807,2147483647] != [actual] r66:/Table/106/1/"{boston"/"333333D\x00\x80\x00\x00\x00\x00\x00\x00\n"-los angeles"/"\x99\x99\x99\x99\x99\x99H\x00\x80\x00\x00\x00\x00\x00\x00\x1e"} [(n2,s2):6, (n3,s3):4, (n4,s4):5, next=7, gen=29, sticky=9223372036.854775807,2147483647] - 65 | /Table/106/1/"los 
angeles"/"\x99\x99\x99\x99\x99\x99H\x00\x80\x00\x00\x00\x00\x00\x00\x1e" | ok - 69 | /Table/106/1/"new york"/"\x19\x99\x99\x99\x99\x99J\x00\x80\x00\x00\x00\x00\x00\x00\x05" | trying to remove a replica that doesn't exist: {ChangeType:REMOVE_VOTER Target:n7,s7} - 45 | /Table/106/1/"paris"/"\xcc\xcc\xcc\xcc\xcc\xcc@\x00\x80\x00\x00\x00\x00\x00\x00(" | trying to remove a replica that doesn't exist: {ChangeType:REMOVE_VOTER Target:n7,s7} - 50 | /Table/106/1/"san francisco"/"\x80\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x19" | trying to add a voter to a store that already has a VOTER_FULL - 46 | /Table/106/1/"seattle"/"ffffffH\x00\x80\x00\x00\x00\x00\x00\x00\x14" | trying to add a voter to a store that already has a VOTER_FULL - 49 | /Table/106/1/"washington dc"/"L\xcc\xcc\xcc\xcc\xccL\x00\x80\x00\x00\x00\x00\x00\x00\x0f" | trying to add a voter to a store that already has a VOTER_FULL -(9 rows) -~~~ - -See the `result` column in the output for the status of the operation. If it's `ok`, the replica was moved with no issues. Other messages will indicate whether the target store is already full (`VOTER_FULL`), or if the replica you're trying to remove doesn't exist. - -### Move all of a range's non-voting replicas from one store to another store - -To move a range's [non-voting replicas](architecture/replication-layer.html#non-voting-replicas), use the statement below. - -{{site.data.alerts.callout_info}} -This statement will only have an effect on clusters that have non-voting replicas configured, such as [multiregion clusters](multiregion-overview.html). If your cluster is not a multiregion cluster, it doesn't do anything, and will display errors in the `result` field as shown below. -{{site.data.alerts.end}} - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER RANGE RELOCATE NONVOTERS FROM 7 TO 2 FOR SELECT range_id from crdb_internal.ranges where table_name = 'users'; -~~~ - -~~~ - range_id | pretty | result ------------+----------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------- - 70 | /Table/106 | type of replica being removed (VOTER_FULL) does not match expectation for change: {ChangeType:REMOVE_NON_VOTER Target:n7,s7} - 67 | /Table/106/1/"amsterdam"/"\xb333333@\x00\x80\x00\x00\x00\x00\x00\x00#" | trying to remove a replica that doesn't exist: {ChangeType:REMOVE_NON_VOTER Target:n7,s7} - 66 | /Table/106/1/"boston"/"333333D\x00\x80\x00\x00\x00\x00\x00\x00\n" | type of replica being removed (VOTER_FULL) does not match expectation for change: {ChangeType:REMOVE_NON_VOTER Target:n7,s7} - 65 | /Table/106/1/"los angeles"/"\x99\x99\x99\x99\x99\x99H\x00\x80\x00\x00\x00\x00\x00\x00\x1e" | type of replica being removed (VOTER_FULL) does not match expectation for change: {ChangeType:REMOVE_NON_VOTER Target:n7,s7} - 69 | /Table/106/1/"new york"/"\x19\x99\x99\x99\x99\x99J\x00\x80\x00\x00\x00\x00\x00\x00\x05" | trying to remove a replica that doesn't exist: {ChangeType:REMOVE_NON_VOTER Target:n7,s7} - 45 | /Table/106/1/"paris"/"\xcc\xcc\xcc\xcc\xcc\xcc@\x00\x80\x00\x00\x00\x00\x00\x00(" | trying to remove a replica that doesn't exist: {ChangeType:REMOVE_NON_VOTER Target:n7,s7} - 50 | /Table/106/1/"san francisco"/"\x80\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x19" | trying to remove a replica that doesn't exist: {ChangeType:REMOVE_NON_VOTER Target:n7,s7} - 46 | 
/Table/106/1/"seattle"/"ffffffH\x00\x80\x00\x00\x00\x00\x00\x00\x14" | trying to remove a replica that doesn't exist: {ChangeType:REMOVE_NON_VOTER Target:n7,s7} - 49 | /Table/106/1/"washington dc"/"L\xcc\xcc\xcc\xcc\xccL\x00\x80\x00\x00\x00\x00\x00\x00\x0f" | trying to remove a replica that doesn't exist: {ChangeType:REMOVE_NON_VOTER Target:n7,s7} -(9 rows) -~~~ - -## See also - -- [`ALTER RANGE`](alter-range.html) -- [`SHOW RANGES`](show-ranges.html) -- [`SHOW RANGE FOR ROW`](show-range-for-row.html) -- [Troubleshoot cluster setup](cluster-setup-troubleshooting.html) -- [Replication Layer](architecture/replication-layer.html) -- [Multiregion Capabilities Overview](multiregion-overview.html) -- [SQL Statements](sql-statements.html) diff --git a/src/current/v22.1/alter-range.md b/src/current/v22.1/alter-range.md deleted file mode 100644 index 0d1b5a6a3d7..00000000000 --- a/src/current/v22.1/alter-range.md +++ /dev/null @@ -1,27 +0,0 @@ ---- -title: ALTER RANGE -summary: Use the ALTER RANGE statement to configure the replication zone for a system range. -toc: true -docs_area: reference.sql ---- - -The `ALTER RANGE` [statement](sql-statements.html) applies a [schema change](online-schema-changes.html) to a range. For information on using `ALTER RANGE`, see the pages for its [subcommands](#subcommands). - -{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %} - -## Subcommands - -| Subcommand | Description | -|----------------------------------------------------------------------------------------+---------------------------------------------------------------------------------| -| [`CONFIGURE ZONE`](configure-zone.html) | [Configure replication zones](configure-replication-zones.html) for a database. | -| **New in v22.1:** [`RELOCATE`](alter-range-relocate.html) | Move a lease or replica between stores in an emergency situation. | - -## See also - -- [Configure Replication Zones](configure-replication-zones.html) -- [Multiregion Capabilities Overview](multiregion-overview.html) -- [Troubleshoot cluster setup](cluster-setup-troubleshooting.html) -- [Replication Layer](architecture/replication-layer.html) -- [`ALTER RANGE ... CONFIGURE ZONE`](configure-zone.html) -- [`ALTER RANGE ... RELOCATE`](alter-range-relocate.html) -- [SQL Statements](sql-statements.html) diff --git a/src/current/v22.1/alter-role.md b/src/current/v22.1/alter-role.md deleted file mode 100644 index 1dad2ea32ec..00000000000 --- a/src/current/v22.1/alter-role.md +++ /dev/null @@ -1,271 +0,0 @@ ---- -title: ALTER ROLE -summary: The ALTER ROLE statement can be used to add or change a role's password. -toc: true -docs_area: reference.sql ---- - -Use the `ALTER ROLE` [statement](sql-statements.html) to add, change, or remove a [role's](create-role.html) password, change the role options for a role, and set default [session variable](set-vars.html) values for a role. - -{{site.data.alerts.callout_info}} -Since the keywords `ROLE` and `USER` can now be used interchangeably in SQL statements for enhanced PostgreSQL compatibility, `ALTER ROLE` is now an alias for [`ALTER USER`](alter-user.html). -{{site.data.alerts.end}} - -## Considerations - -- Password creation and alteration is supported only in secure clusters. - -## Required privileges - -- To alter an [`admin` role](security-reference/authorization.html#admin-role), the user must be a member of the `admin` role. 
-- To alter other roles, the user must be a member of the `admin` role or have the [`CREATEROLE`](create-role.html#create-a-role-that-can-create-other-roles-and-manage-authentication-methods-for-the-new-roles) role option set. - -## Synopsis - -
-{% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/alter_role.html %} -
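-
-The following statements illustrate the two main forms of `ALTER ROLE`: changing a role option and setting a default session variable value. They use the example roles `carl` and `max` from the [examples](#examples) later on this page:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-ALTER ROLE carl WITH CREATEDB;
-ALTER ROLE max SET timezone = 'America/New_York';
-~~~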
- -## Parameters - - - -Parameter | Description -----------|------------- -`role_name` | The name of the role to alter. -`WITH role_option` | Apply a [role option](#role-options) to the role. -`SET {session variable}` | Set default [session variable](set-vars.html) values for a role. -`RESET {session variable}`
`RESET ALL` | Reset one session variable or all session variables to the default value. -`IN DATABASE database_name` | Specify a database for which to apply session variable defaults.
When `IN DATABASE` is not specified, the default session variable values apply for a role in all databases.
Note that, in order for a session to initialize session variable values to database defaults, the database must be specified as a [connection parameter](connection-parameters.html). Database default values will not appear if the database is set after connecting with `USE`/`SET database=`. -`ROLE ALL ...`/`USER ALL ...` | Apply session variable settings to all roles.<br>
Exception: The `root` user is exempt from session variable settings. - -### Role options - -Role option | Description -------------|------------- -`CANCELQUERY`/`NOCANCELQUERY` | Allow or disallow a role to cancel [queries](cancel-query.html) and [sessions](cancel-session.html) of other roles. Without this role option, roles can only cancel their own queries and sessions. Even with the `CANCELQUERY` role option, non-`admin` roles cannot cancel `admin` queries or sessions. This option should usually be combined with `VIEWACTIVITY` so that the role can view other roles' query and session information.

By default, the role option is set to `NOCANCELQUERY` for all non-`admin` roles. -`CONTROLCHANGEFEED`/`NOCONTROLCHANGEFEED` | Allow or disallow a role to run [`CREATE CHANGEFEED`](create-changefeed.html) on tables they have `SELECT` privileges on.

By default, the role option is set to `NOCONTROLCHANGEFEED` for all non-`admin` roles. -`CONTROLJOB`/`NOCONTROLJOB` | Allow or disallow a role to [pause](pause-job.html), [resume](resume-job.html), and [cancel](cancel-job.html) jobs. Non-`admin` roles cannot control jobs created by `admin` roles.

By default, the role option is set to `NOCONTROLJOB` for all non-`admin` roles. -`CREATEDB`/`NOCREATEDB` | Allow or disallow a role to [create](create-database.html) or [rename](rename-database.html) a database. The role is assigned as the owner of the database.

By default, the role option is set to `NOCREATEDB` for all non-`admin` roles. -`CREATELOGIN`/`NOCREATELOGIN` | Allow or disallow a role to manage authentication using the `WITH PASSWORD`, `VALID UNTIL`, and `LOGIN/NOLOGIN` role options.

By default, the role option is set to `NOCREATELOGIN` for all non-`admin` roles. -`CREATEROLE`/`NOCREATEROLE` | Allow or disallow the new role to [create](create-role.html), alter, and [drop](drop-role.html) other non-`admin` roles.

By default, the role option is set to `NOCREATEROLE` for all non-`admin` roles. -`LOGIN`/`NOLOGIN` | Allow or disallow a role to log in with one of the [client authentication methods](authentication.html#client-authentication). Setting the role option to `NOLOGIN` prevents the role from logging in using any authentication method. -`MODIFYCLUSTERSETTING`/`NOMODIFYCLUSTERSETTING` | Allow or disallow a role to modify the [cluster settings](cluster-settings.html) with the `sql.defaults` prefix.

By default, the role option is set to `NOMODIFYCLUSTERSETTING` for all non-`admin` roles. -`PASSWORD password`/`PASSWORD NULL` | The credential the role uses to [authenticate their access to a secure cluster](authentication.html#client-authentication). A password should be entered as a [string literal](sql-constants.html#string-literals). For compatibility with PostgreSQL, a password can also be entered as an identifier.

To prevent a role from using [password authentication](authentication.html#client-authentication) and to mandate [certificate-based client authentication](authentication.html#client-authentication), [set the password as `NULL`](create-role.html#prevent-a-role-from-using-password-authentication). -`SQLLOGIN`/`NOSQLLOGIN` | Allow or disallow a role to log in using the SQL CLI with one of the [client authentication methods](authentication.html#client-authentication). The role option to `NOSQLLOGIN` prevents the role from logging in using the SQL CLI with any authentication method while retaining the ability to log in to DB Console. It is possible to have both `NOSQLLOGIN` and `LOGIN` set for a role and `NOSQLLOGIN` takes precedence on restrictions.

Without any role options all login behavior is permitted. -`VALID UNTIL` | The date and time (in the [`timestamp`](timestamp.html) format) after which the [password](#parameters) is not valid. -`VIEWACTIVITY`/`NOVIEWACTIVITY` | Allow or disallow a role to see other roles' [queries](show-statements.html) and [sessions](show-sessions.html) using `SHOW STATEMENTS`, `SHOW SESSIONS`, and the [**Statements**](ui-statements-page.html) and [**Transactions**](ui-transactions-page.html) pages in the DB Console. `VIEWACTIVITY` also permits visibility of node hostnames and IP addresses in the DB Console. With `NOVIEWACTIVITY`, the `SHOW` commands show only the role's own data, and DB Console pages redact node hostnames and IP addresses.

By default, the role option is set to `NOVIEWACTIVITY` for all non-`admin` roles. -`VIEWCLUSTERSETTING` / `NOVIEWCLUSTERSETTING` | Allow or disallow a role to view the [cluster settings](cluster-settings.html) with `SHOW CLUSTER SETTING` or to access the [**Cluster Settings**](ui-debug-pages.html) page in the DB Console.

By default, the role option is set to `NOVIEWCLUSTERSETTING` for all non-`admin` roles. -`VIEWACTIVITYREDACTED`/`NOVIEWACTIVITYREDACTED` | Allow or disallow a role to see other roles' queries and sessions using `SHOW STATEMENTS`, `SHOW SESSIONS`, and the Statements and Transactions pages in the DB Console. With `VIEWACTIVITYREDACTED`, a user will not have access to statement diagnostics bundles (which can contain PII) in the DB Console, and will not be able to list queries containing [constants](sql-constants.html) for other users when using the `listSessions` endpoint through the [Cluster API](cluster-api.html). It is possible to have both `VIEWACTIVITY` and `VIEWACTIVITYREDACTED` set for a role; in that case, `VIEWACTIVITYREDACTED` takes precedence. If the user has `VIEWACTIVITY` but does not have `VIEWACTIVITYREDACTED`, they will be able to see DB Console pages and have access to statement diagnostics bundles.<br>

By default, the role option is set to `NOVIEWACTIVITYREDACTED` for all non-`admin` roles. - -## Examples - -{{site.data.alerts.callout_info}} -The following statements are run by the `root` user that is a member of the `admin` role and has `ALL` privileges. -{{site.data.alerts.end}} - -### Allow a role to log in to the database using a password - -The following example allows a role to log in to the database with a [password](authentication.html#client-authentication): - -~~~ sql -root@:26257/defaultdb> ALTER ROLE carl WITH LOGIN PASSWORD 'An0ther$tr0nGpassW0rD' VALID UNTIL '2021-10-10'; -~~~ - -### Prevent a role from using password authentication - -The following statement prevents the user from using password authentication and mandates certificate-based [client authentication](authentication.html#client-authentication): - -{% include_cached copy-clipboard.html %} -~~~ sql -root@:26257/defaultdb> ALTER ROLE carl WITH PASSWORD NULL; -~~~ - -### Allow a role to create other roles and manage authentication methods for the new roles - -The following example allows the role to [create other roles](create-role.html) and [manage authentication methods](authentication.html#client-authentication) for them: - -~~~ sql -root@:26257/defaultdb> ALTER ROLE carl WITH CREATEROLE CREATELOGIN; -~~~ - -### Allow a role to create and rename databases - -The following example allows the role to [create](create-database.html) or [rename](rename-database.html) databases: - -~~~ sql -root@:26257/defaultdb> ALTER ROLE carl WITH CREATEDB; -~~~ - -### Allow a role to pause, resume, and cancel non-admin jobs - -The following example allows the role to [pause](pause-job.html), [resume](resume-job.html), and [cancel](cancel-job.html) jobs: - -~~~ sql -root@:26257/defaultdb> ALTER ROLE carl WITH CONTROLJOB; -~~~ - -### Allow a role to see and cancel non-admin queries and sessions - -The following example allows the role to cancel [queries](cancel-query.html) and [sessions](cancel-session.html) for other non-`admin` roles: - -~~~ sql -root@:26257/defaultdb> ALTER ROLE carl WITH CANCELQUERY VIEWACTIVITY; -~~~ - -### Allow a role to control changefeeds - -The following example allows the role to run [`CREATE CHANGEFEED`](create-changefeed.html): - -~~~ sql -root@:26257/defaultdb> ALTER ROLE carl WITH CONTROLCHANGEFEED; -~~~ - -### Allow a role to modify cluster settings - -The following example allows the role to modify [cluster settings](cluster-settings.html): - -~~~ sql -root@:26257/defaultdb> ALTER ROLE carl WITH MODIFYCLUSTERSETTING; -~~~ - -### Set default session variable values for a role - -In the following example, the `root` user creates a role named `max`, and sets the default value of the `timezone` [session variable](set-vars.html#supported-variables) for the `max` role. 
- -~~~ sql -root@:26257/defaultdb> CREATE ROLE max WITH LOGIN; -~~~ - -~~~ sql -root@:26257/defaultdb> ALTER ROLE max SET timezone = 'America/New_York'; -~~~ - -This statement does not affect the default `timezone` value for any role other than `max`: - -~~~ sql -root@:26257/defaultdb> SHOW timezone; -~~~ - -~~~ - timezone ------------- - UTC -(1 row) -~~~ - -To see the default `timezone` value for the `max` role, run the `SHOW` statement as a member of the `max` role: - -~~~ sql -max@:26257/defaultdb> SHOW timezone; -~~~ - -~~~ - timezone --------------------- - America/New_York -(1 row) -~~~ - -### Set default session variable values for a role in a specific database - -In the following example, the `root` user creates a role named `max` and a database named `movr`, and sets the default value of the `statement_timeout` [session variable](set-vars.html#supported-variables) for the `max` role in the `movr` database. - -~~~ sql -root@:26257/defaultdb> CREATE DATABASE movr; -~~~ - -~~~ sql -root@:26257/defaultdb> CREATE ROLE max WITH LOGIN; -~~~ - -~~~ sql -root@:26257/defaultdb> ALTER ROLE max IN DATABASE movr SET statement_timeout = '10s'; -~~~ - -This statement does not affect the default `statement_timeout` value for any role other than `max`, or in any database other than `movr`. - -~~~ sql -root@:26257/defaultdb> SHOW statement_timeout; -~~~ - -~~~ - statement_timeout ---------------------- - 0 -(1 row) -~~~ - -To see the new default `statement_timeout` value for the `max` role, run the `SHOW` statement as a member of the `max` role that has connected to the cluster, with the database `movr` specified in the connection string. - -~~~ shell -cockroach sql --url 'postgresql://max@localhost:26257/movr?sslmode=disable' -~~~ - -~~~ sql -max@:26257/movr> SHOW statement_timeout; -~~~ - -~~~ - statement_timeout ---------------------- - 10000 -(1 row) -~~~ - -### Set default session variable values for a specific database - -In the following example, the `root` user creates a database named `movr`, and sets the default value of the `timezone` [session variable](set-vars.html#supported-variables) for all roles in that database. - -~~~ sql -root@:26257/defaultdb> CREATE DATABASE movr; -~~~ - -~~~ sql -root@:26257/defaultdb> ALTER ROLE ALL IN DATABASE movr SET timezone = 'America/New_York'; -~~~ - -{{site.data.alerts.callout_info}} -This statement is identical to [`ALTER DATABASE movr SET timezone = 'America/New_York';`](alter-database.html). 
-{{site.data.alerts.end}} - -This statement does not affect the default `timezone` value for any database other than `movr`: - -~~~ sql -root@:26257/defaultdb> SHOW timezone; -~~~ - -~~~ - timezone ------------- - UTC -(1 row) -~~~ - -To see the default `timezone` value for the `max` role, run the `SHOW` statement as a member of the `max` role: - -~~~ sql -root@:26257/movr> SHOW timezone; -~~~ - -~~~ - timezone --------------------- - America/New_York -(1 row) -~~~ - -## See also - -- [`DROP ROLE`](drop-role.html) -- [`SHOW ROLES`](show-roles.html) -- [`GRANT`](grant.html) -- [`SHOW GRANTS`](show-grants.html) -- [`cockroach cert`](cockroach-cert.html) -- [SQL Statements](sql-statements.html) -- [Authorization Best Practices](security-reference/authorization.html#authorization-best-practices) - diff --git a/src/current/v22.1/alter-schema.md b/src/current/v22.1/alter-schema.md deleted file mode 100644 index 0db1a28c002..00000000000 --- a/src/current/v22.1/alter-schema.md +++ /dev/null @@ -1,204 +0,0 @@ ---- -title: ALTER SCHEMA -summary: The ALTER SCHEMA statement modifies a user-defined schema in a database. -toc: true -docs_area: reference.sql ---- - -The `ALTER SCHEMA` [statement](sql-statements.html) modifies a user-defined [schema](sql-name-resolution.html#naming-hierarchy). CockroachDB currently supports changing the name of the schema and the owner of the schema. - -{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %} - -## Syntax - -
-{% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/alter_schema.html %} -
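-
-For illustration, the two supported forms (renaming a schema and changing its owner) look like the following, using the `org_one` schema and the `max` user from the examples later on this page:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-ALTER SCHEMA org_one RENAME TO org_two;
-ALTER SCHEMA org_one OWNER TO max;
-~~~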
- -### Parameters - -Parameter | Description -----------|------------ -`name`
`name.name` | The name of the schema to alter, or the name of the database containing the schema and the schema name, separated by a "`.`". -`RENAME TO schema_name` | Rename the schema to `schema_name`. The new schema name must be unique within the database and follow these [identifier rules](keywords-and-identifiers.html#identifiers). -`OWNER TO role_spec` | Change the owner of the schema to `role_spec`. - -## Required privileges - -- To rename a schema, the user must be the owner of the schema. -- To change the owner of a schema, the user must be the current owner of the schema and a member of the new owner [role](security-reference/authorization.html#roles). The new owner role must also have the `CREATE` [privilege](security-reference/authorization.html#managing-privileges) on the database to which the schema belongs. - -## Example - -{% include {{page.version.version}}/sql/movr-statements.md %} - -### Rename a schema - -Suppose that you access the [SQL shell](cockroach-sql.html) as user `root`, and [create a new user](create-user.html) `max` and [a schema](create-schema.html) `org_one` with `max` as the owner: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE USER max; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE SCHEMA org_one AUTHORIZATION max; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW SCHEMAS; -~~~ - -~~~ - schema_name ----------------------- - crdb_internal - information_schema - org_one - pg_catalog - pg_extension - public -(6 rows) -~~~ - -Now, suppose you want to rename the schema: - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER SCHEMA org_one RENAME TO org_two; -~~~ - -~~~ -ERROR: must be owner of schema "org_one" -SQLSTATE: 42501 -~~~ - -Because you are executing the `ALTER SCHEMA` command as a non-owner of the schema (i.e., `root`), CockroachDB returns an error. - -[Drop the schema](drop-schema.html) and create it again, this time with `root` as the owner. - -{% include_cached copy-clipboard.html %} -~~~ sql -> DROP SCHEMA org_one; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE SCHEMA org_one; -~~~ - -To verify that the owner is now `root`, query the `pg_catalog.pg_namespace` and `pg_catalog.pg_users` tables: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT - nspname, usename -FROM - pg_catalog.pg_namespace - LEFT JOIN pg_catalog.pg_user ON pg_namespace.nspowner = pg_user.usesysid -WHERE - nspname LIKE 'org_one'; -~~~ - -~~~ - nspname | usename -----------+---------- - org_one | root -(1 row) -~~~ - -As its owner, you can rename the schema: - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER SCHEMA org_one RENAME TO org_two; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW SCHEMAS; -~~~ - -~~~ - schema_name ----------------------- - crdb_internal - information_schema - org_two - pg_catalog - pg_extension - public -(6 rows) -~~~ - -### Change a schema's owner - -Suppose that you access the [SQL shell](cockroach-sql.html) as user `root`, and [create a new schema](create-schema.html) named `org_one`: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE SCHEMA org_one; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW SCHEMAS; -~~~ - -~~~ - schema_name ----------------------- - crdb_internal - information_schema - org_one - pg_catalog - pg_extension - public -(6 rows) -~~~ - -Now, suppose that you want to change the owner of the schema `org_one` to an existing user named `max`. 
To change the owner of a schema, the current owner must belong to the role of the new owner (in this case, `max`), and the new owner must have `CREATE` privileges on the database. - -{% include_cached copy-clipboard.html %} -~~~ sql -> GRANT max TO root; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> GRANT CREATE ON DATABASE defaultdb TO max; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER SCHEMA org_one OWNER TO max; -~~~ - -To verify that the owner is now `max`, query the `pg_catalog.pg_namespace` and `pg_catalog.pg_users` tables: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT - nspname, usename -FROM - pg_catalog.pg_namespace - LEFT JOIN pg_catalog.pg_user ON pg_namespace.nspowner = pg_user.usesysid -WHERE - nspname LIKE 'org_one'; -~~~ - -~~~ - nspname | usename -----------+---------- - org_one | max -(1 row) -~~~ - -## See also - -- [`CREATE SCHEMA`](create-schema.html) -- [`SHOW SCHEMAS`](show-schemas.html) -- [`DROP SCHEMA`](drop-schema.html) -- [Online Schema Changes](online-schema-changes.html) diff --git a/src/current/v22.1/alter-sequence.md b/src/current/v22.1/alter-sequence.md deleted file mode 100644 index f05dc0317d1..00000000000 --- a/src/current/v22.1/alter-sequence.md +++ /dev/null @@ -1,259 +0,0 @@ ---- -title: ALTER SEQUENCE -summary: Use the ALTER SEQUENCE statement to change the name, increment values, and other settings of a sequence. -toc: true -docs_area: reference.sql ---- - -The `ALTER SEQUENCE` [statement](sql-statements.html) applies a [schema change](online-schema-changes.html) to a sequence. - -{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %} - -## Required privileges - -- To alter a sequence, the user must have the `CREATE` [privilege](security-reference/authorization.html#managing-privileges) on the parent database. -- To change the schema of a sequence with `ALTER SEQUENCE ... SET SCHEMA`, or to change the database of a sequence with `ALTER SEQUENCE ... RENAME TO`, the user must also have the `DROP` [privilege](security-reference/authorization.html#managing-privileges) on the sequence. - -## Syntax - -
-{% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/alter_sequence.html %} -
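-
-For illustration, the following statements (drawn from the examples later on this page) change a sequence's increment value and rename a sequence:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-ALTER SEQUENCE customer_seq INCREMENT 2;
-ALTER SEQUENCE even_numbers RENAME TO even_sequence;
-~~~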
- -## Parameters - - - - Parameter | Description ------------|------------ -`IF EXISTS` | Modify the sequence only if it exists; if it does not exist, do not return an error. -`sequence_name` | The name of the sequence. -`RENAME TO sequence_name` | Rename the sequence to `sequence_name`, which must be unique to its database and follow these [identifier rules](keywords-and-identifiers.html#identifiers). Name changes do not propagate to the table(s) using the sequence.

Note that `RENAME TO` can be used to move a sequence from one database to another, but it cannot be used to move a sequence from one schema to another. To change a sequence's schema, use `ALTER SEQUENCE ... SET SCHEMA` instead. In a future release, `RENAME TO` will be limited to changing the name of a sequence, and will not have the ability to change a sequence's database. -`CYCLE`/`NO CYCLE` | The sequence will wrap around when the sequence value hits the maximum or minimum value. If `NO CYCLE` is set, the sequence will not wrap. -`OWNED BY column_name` | Associates the sequence with a particular column. If that column or its parent table is dropped, the sequence will also be dropped.<br>

Specifying an owner column with `OWNED BY` replaces any existing owner column on the sequence. To remove existing column ownership and make the sequence free-standing, specify `OWNED BY NONE`.<br>

**Default:** `NONE` -`CACHE` | The number of sequence values to cache in memory for reuse in the session. A cache size of `1` means that there is no cache, and cache sizes of less than `1` are not valid.

**Default:** `1` (sequences are not cached by default) -`MINVALUE` | The new minimum value of the sequence.

**Default:** `1` -`MAXVALUE` | The new maximum value of the sequence.

**Default:** `9223372036854775807` -`INCREMENT` | The new value by which the sequence is incremented. A negative number creates a descending sequence. A positive number creates an ascending sequence. -`START` | The value the sequence starts at if you `RESTART` or if the sequence hits the `MAXVALUE` and `CYCLE` is set.

`RESTART` and `CYCLE` are not implemented yet. -`VIRTUAL` | Creates a *virtual sequence*.

Virtual sequences are sequences that do not generate monotonically increasing values and instead produce values like those generated by the built-in function [`unique_rowid()`](functions-and-operators.html). They are intended for use in combination with [`SERIAL`](serial.html)-typed columns. -`SET SCHEMA schema_name` | Change the schema of the sequence to `schema_name`. -`OWNER TO role_spec` | Change the owner of the sequence to `role_spec`. - -## Examples - -### Change the increment value of a sequence - -In this example, we're going to change the increment value of a sequence from its current state (i.e., `1`) to `2`. - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE SEQUENCE customer_seq; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW CREATE customer_seq; -~~~ - -~~~ - table_name | create_statement ----------------+------------------------------------------------------------------------------------------- - customer_seq | CREATE SEQUENCE customer_seq MINVALUE 1 MAXVALUE 9223372036854775807 INCREMENT 1 START 1 -(1 row) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER SEQUENCE customer_seq INCREMENT 2; -~~~ - -~~~ - table_name | create_statement ----------------+-------------------------------------------------------------------------------------------------- - customer_seq | CREATE SEQUENCE public.customer_seq MINVALUE 1 MAXVALUE 9223372036854775807 INCREMENT 2 START 1 -(1 row) -~~~ - -### Rename a sequence - -In this example, we will change the name of sequence. - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE SEQUENCE even_numbers INCREMENT 2 START 2; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW SEQUENCES; -~~~ - -~~~ - sequence_schema | sequence_name -------------------+---------------- - public | even_numbers -(1 row) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER SEQUENCE even_numbers RENAME TO even_sequence; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW SEQUENCES; -~~~ - -~~~ - sequence_schema | sequence_name -------------------+---------------- - public | even_sequence -(1 row) -~~~ - -### Change the database of a sequence - -In this example, we will move the sequence we renamed in the first example (`even_sequence`) from `defaultdb` (i.e., the default database) to a different database. 
- -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW SEQUENCES FROM defaultdb; -~~~ - -~~~ - sequence_schema | sequence_name -------------------+---------------- - public | even_sequence -(1 row) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE DATABASE mydb; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER SEQUENCE even_sequence RENAME TO mydb.even_sequence; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW SEQUENCES FROM defaultdb; -~~~ - -~~~ - sequence_schema | sequence_name -------------------+---------------- -(0 rows) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW SEQUENCES FROM mydb; -~~~ - -~~~ - sequence_schema | sequence_name -------------------+---------------- - public | even_sequence -(1 row) -~~~ - -### Change the schema of a sequence - -Suppose you [create a sequence](create-sequence.html) that you would like to add to a new schema called `cockroach_labs`: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE SEQUENCE even_numbers INCREMENT 2 START 2; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW SEQUENCES; -~~~ - -~~~ - sequence_schema | sequence_name -------------------+---------------- - public | even_numbers -(1 row) -~~~ - -By default, [unqualified sequences](sql-name-resolution.html#lookup-with-unqualified-names) created in the database belong to the `public` schema: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW CREATE public.even_numbers; -~~~ - -~~~ - table_name | create_statement -----------------------+-------------------------------------------------------------------------------------------------- - public.even_numbers | CREATE SEQUENCE public.even_numbers MINVALUE 1 MAXVALUE 9223372036854775807 INCREMENT 2 START 2 -(1 row) -~~~ - -If the new schema does not already exist, [create it](create-schema.html): - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE SCHEMA IF NOT EXISTS cockroach_labs; -~~~ - -Then, change the sequence's schema: - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER SEQUENCE even_numbers SET SCHEMA cockroach_labs; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW CREATE public.even_numbers; -~~~ - -~~~ -ERROR: relation "public.even_numbers" does not exist -SQLSTATE: 42P01 -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW SEQUENCES; -~~~ - -~~~ - sequence_schema | sequence_name -------------------+---------------- - cockroach_labs | even_numbers -(1 row) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW CREATE cockroach_labs.even_numbers; -~~~ - -~~~ - table_name | create_statement -------------------------------+---------------------------------------------------------------------------------------------------------- - cockroach_labs.even_numbers | CREATE SEQUENCE cockroach_labs.even_numbers MINVALUE 1 MAXVALUE 9223372036854775807 INCREMENT 2 START 2 -(1 row) -~~~ - -## See also - -- [`CREATE SEQUENCE`](create-sequence.html) -- [`DROP SEQUENCE`](drop-sequence.html) -- [`SHOW SEQUENCES`](show-sequences.html) -- [Functions and Operators](functions-and-operators.html) -- [SQL Statements](sql-statements.html) -- [Online Schema Changes](online-schema-changes.html) diff --git a/src/current/v22.1/alter-super-region.md b/src/current/v22.1/alter-super-region.md deleted file mode 100644 index 7428bf2df20..00000000000 --- a/src/current/v22.1/alter-super-region.md +++ /dev/null @@ -1,100 +0,0 @@ ---- -title: ALTER SUPER REGION -summary: The ALTER SUPER REGION 
statement alters an existing super region to include a different set of regions. -toc: true -docs_area: reference.sql ---- - -The `ALTER DATABASE ... ALTER SUPER REGION` [statement](sql-statements.html) alters an existing [super region](multiregion-overview.html#super-regions) of a [multi-region database](multiregion-overview.html). - -{% include enterprise-feature.md %} - -{{site.data.alerts.callout_info}} -`ALTER SUPER REGION` is a subcommand of [`ALTER DATABASE`](alter-database.html). -{{site.data.alerts.end}} - -{{site.data.alerts.callout_info}} -{% include feature-phases/preview.md %} -{{site.data.alerts.end}} - -## Synopsis - -<div>
-{% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/alter_database_alter_super_region.html %} -
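-
-For illustration, a statement of the following form redefines the super region `"usa"` to consist of the listed regions. The database, super region, and region names are the ones used in the examples later on this page; the `VALUES` list can name any set of the database's regions that the super region should contain:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-ALTER DATABASE movr ALTER SUPER REGION "usa" VALUES "us-east1";
-~~~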
- -## Parameters - -| Parameter | Description | -|-----------------+----------------------------------------------------------------------------------------------------------------------| -| `database_name` | The database with the [super region](multiregion-overview.html#super-regions) you are altering. | -| `name` | The name of the [super region](multiregion-overview.html#super-regions) being altered. | -| `name_list` | The altered super region will consist of this set of [database regions](multiregion-overview.html#database-regions). | - -## Required privileges - -To alter a database's super region, the user must have one of the following: - -- Membership to the [`admin`](security-reference/authorization.html#admin-role) role for the cluster. -- Either [ownership](security-reference/authorization.html#object-ownership) or the [`CREATE` privilege](security-reference/authorization.html#supported-privileges) for the database. - -## Considerations - -{% include {{page.version.version}}/sql/super-region-considerations.md %} - -## Examples - -The examples in this section use the following setup. - -{% include {{page.version.version}}/sql/multiregion-example-setup.md %} - -#### Set up movr database regions - -{% include {{page.version.version}}/sql/multiregion-movr-add-regions.md %} - -#### Set up movr global tables - -{% include {{page.version.version}}/sql/multiregion-movr-global.md %} - -#### Set up movr regional tables - -{% include {{page.version.version}}/sql/multiregion-movr-regional-by-row.md %} - -### Enable super regions - -{% include {{page.version.version}}/sql/enable-super-regions.md %} - -### Alter a super region - -This example assumes you have already added a `"usa"` super region as shown in the example [Add a super region to a database](add-super-region.html#add-a-super-region-to-a-database). If you wanted to [drop the region](drop-region.html) `us-west1`, you would first need to remove it from the super region. - -To remove a region from a super region, use the `ALTER DATABASE ... ALTER SUPER REGION` statement and list only the regions that should remain in the super region: - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER DATABASE movr ALTER SUPER REGION "usa" VALUES "us-east1"; -~~~ - -~~~ -ALTER DATABASE ALTER SUPER REGION -~~~ - -To add a region to a super region, alter the super region as shown above to be a list of regions that includes the existing and the new regions. - -### Allow user to modify a primary region that is part of a super region - -{% include {{page.version.version}}/sql/enable-super-region-primary-region-changes.md %} - -## See also - -- [Multi-Region Capabilities Overview](multiregion-overview.html) -- [Super regions](multiregion-overview.html#super-regions) -- [`SET PRIMARY REGION`](set-primary-region.html) -- [`SHOW SUPER REGIONS`](show-super-regions.html) -- [`DROP SUPER REGION`](drop-super-region.html) -- [`ADD SUPER REGION`](add-super-region.html) -- [`DROP REGION`](drop-region.html) -- [`SHOW REGIONS`](show-regions.html) -- [`ALTER TABLE`](alter-table.html) -- [`ALTER DATABASE`](alter-database.html) -- [SQL Statements](sql-statements.html) diff --git a/src/current/v22.1/alter-table.md b/src/current/v22.1/alter-table.md deleted file mode 100644 index b7726bdc850..00000000000 --- a/src/current/v22.1/alter-table.md +++ /dev/null @@ -1,42 +0,0 @@ ---- -title: ALTER TABLE -summary: Use the ALTER TABLE statement to change the schema of a table. 
-toc: true -docs_area: reference.sql ---- - -The `ALTER TABLE` [statement](sql-statements.html) changes the definition of a table. For information on using `ALTER TABLE`, see the pages for its [subcommands](#subcommands). - -{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %} - -## Subcommands - -{{site.data.alerts.callout_success}} -Some subcommands can be used in combination in a single `ALTER TABLE` statement. For example, you can [atomically rename a column and add a new column with the old name of the existing column](rename-column.html#add-and-rename-columns-atomically). -{{site.data.alerts.end}} - -Subcommand | Description | Can combine with other subcommands? ------------|-------------|------------------------------------ -[`ADD COLUMN`](add-column.html) | Add columns to tables. | Yes -[`ADD CONSTRAINT`](add-constraint.html) | Add constraints to columns. | Yes -[`ALTER COLUMN`](alter-column.html) | Change an existing column. | Yes -[`ALTER PRIMARY KEY`](alter-primary-key.html) | Change the [primary key](primary-key.html) of a table. | Yes | No -[`DROP COLUMN`](drop-column.html) | Remove columns from tables. | Yes -[`DROP CONSTRAINT`](drop-constraint.html) | Remove constraints from columns. | Yes -[`EXPERIMENTAL_AUDIT`](experimental-audit.html) | Enable per-table audit logs, for security purposes. | Yes -[`OWNER TO`](owner-to.html) | Change the owner of the table. -[`PARTITION BY`](partition-by.html) | Partition, re-partition, or un-partition a table ([Enterprise-only](enterprise-licensing.html)). | Yes -[`RENAME COLUMN`](rename-column.html) | Change the names of columns. | Yes -[`RENAME CONSTRAINT`](rename-constraint.html) | Change constraints columns. | Yes -[`RENAME TO`](rename-table.html) | Change the names of tables. | No -[`RESET (storage parameter)`](reset-storage-parameter.html) | Reset a storage parameter on a table to its default value. | Yes -[`SET SCHEMA`](set-schema.html) | Change the [schema](sql-name-resolution.html) of a table. | No -[`SPLIT AT`](split-at.html) | Force a [range split](architecture/distribution-layer.html#range-splits) at the specified row in the table. | No -[`UNSPLIT AT`](unsplit-at.html) | Remove a range split enforcement in the table. | No -[`VALIDATE CONSTRAINT`](validate-constraint.html) | Check whether values in a column match a [constraint](constraints.html) on the column. | Yes -[`SET LOCALITY {REGIONAL BY TABLE, REGIONAL BY ROW, GLOBAL}`](set-locality.html) | Set the table locality for a table in a [multi-region database](multiregion-overview.html). | No -[`SET (storage parameter)`](set-storage-parameter.html) | Set a storage parameter on a table. | Yes - -## View schema changes - -{% include {{ page.version.version }}/misc/schema-change-view-job.md %} diff --git a/src/current/v22.1/alter-type.md b/src/current/v22.1/alter-type.md deleted file mode 100644 index 25d67b008a7..00000000000 --- a/src/current/v22.1/alter-type.md +++ /dev/null @@ -1,158 +0,0 @@ ---- -title: ALTER TYPE -summary: The ALTER TYPE statement modifies a user-defined data type in a database. -toc: true -docs_area: reference.sql ---- - -The `ALTER TYPE` [statement](sql-statements.html) modifies a user-defined, [enumerated data type](enum.html) in the current database. - -{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %} - -{{site.data.alerts.callout_info}} -You can only [cancel](cancel-job.html) `ALTER TYPE` [schema change jobs](online-schema-changes.html) that drop values. 
This is because when you drop a value, CockroachDB searches through every row that could contain the `ENUM` value, which could take a long time. - -All other `ALTER TYPE` schema change jobs are non-cancellable. -{{site.data.alerts.end}} - -## Synopsis - -
-{% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/alter_type.html %} -
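-
-For illustration, the following statements (using the `status` type from the example later on this page) add a value to a user-defined type and rename an existing value:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-ALTER TYPE status ADD VALUE 'pending';
-ALTER TYPE status RENAME VALUE 'open' TO 'active';
-~~~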
- -## Parameters - -Parameter | Description -----------|------------ -`type_name` | The name of the user-defined type. -`ADD VALUE value` | Add a constant value to the user-defined type's list of values. You can optionally specify `BEFORE value` or `AFTER value` to add the value in sort order relative to an existing value. -`DROP VALUE value` | Drop a specific value from the user-defined type's list of values. -`RENAME TO name` | Rename the user-defined type. -`RENAME VALUE value TO value` | Rename a constant value in the user-defined type's list of values. -`SET SCHEMA` | Set [the schema](sql-name-resolution.html) of the user-defined type. -`OWNER TO` | Change the [role specification](grant.html) for the user-defined type's owner. - -## Required privileges - -- To [alter a type](alter-type.html), the user must be the owner of the type. -- To set the schema of a user-defined type, the user must have the `CREATE` [privilege](security-reference/authorization.html#managing-privileges) on the schema and the `DROP` privilege -on the type. -- To alter the owner of a user-defined type: - - The user executing the command must be a member of the new owner role. - - The new owner role must have the `CREATE` privilege on the schema the type belongs to. - -## Known limitations - -- You can only reference a user-defined type from the database that contains the type. - -## Example - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TYPE status AS ENUM ('open', 'closed', 'inactive'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW ENUMS; -~~~ - -~~~ - schema | name | values | owner ----------+--------+------------------------+-------- - public | status | {open,closed,inactive} | demo -(1 row) -~~~ - -### Add a value to a user-defined type - -To add a value to the `status` type, use an `ADD VALUE` clause: - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TYPE status ADD VALUE 'pending'; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW ENUMS; -~~~ - -~~~ - schema | name | values | owner ----------+--------+--------------------------------+-------- - public | status | {open,closed,inactive,pending} | demo -(1 row) -~~~ - -### Rename a value in a user-defined type - -To rename a value in the `status` type, use a `RENAME VALUE` clause: - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TYPE status RENAME VALUE 'open' TO 'active'; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW ENUMS; -~~~ - -~~~ - schema | name | values | owner ----------+--------+----------------------------------+-------- - public | status | {active,closed,inactive,pending} | demo -(1 row) -~~~ - -### Rename a user-defined type - -To rename the `status` type, use a `RENAME TO` clause: - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TYPE status RENAME TO account_status; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW ENUMS; -~~~ - -~~~ - schema | name | values | owner ----------+----------------+----------------------------------+-------- - public | account_status | {active,closed,inactive,pending} | demo -(1 row) -~~~ - -### Drop a value in a user-defined type - -To drop a value from the `account_status` type, use a `DROP VALUE` clause: - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TYPE account_status DROP VALUE 'inactive'; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW ENUMS; -~~~ - -~~~ - schema | name | values | owner ----------+----------------+-------------------------+-------- - 
public | account_status | {active,closed,pending} | demo -(1 row) -~~~ - -## See also - -- [`CREATE TYPE`](create-type.html) -- [`ENUM`](enum.html) -- [`SHOW ENUMS`](show-enums.html) -- [`DROP TYPE`](drop-type.html) -- [Online Schema Changes](online-schema-changes.html) diff --git a/src/current/v22.1/alter-user.md b/src/current/v22.1/alter-user.md deleted file mode 100644 index 7fd423fcb87..00000000000 --- a/src/current/v22.1/alter-user.md +++ /dev/null @@ -1,126 +0,0 @@ ---- -title: ALTER USER -summary: The ALTER USER statement can be used to add or change a user's password. -toc: true -docs_area: reference.sql ---- - -The `ALTER USER` [statement](sql-statements.html) can be used to add, change, or remove a [user's](create-user.html) password and to change the role options for a user. - -{{site.data.alerts.callout_info}} - Since the keywords `ROLE` and `USER` can now be used interchangeably in SQL statements for enhanced PostgreSQL compatibility, `ALTER USER` is now an alias for [`ALTER ROLE`](alter-role.html). -{{site.data.alerts.end}} - -## Considerations - -- Password creation and alteration is supported only in secure clusters. - -## Required privileges - - To alter other users, the user must be a member of the `admin` role or have the [`CREATEROLE`](create-user.html#create-a-user-that-can-create-other-users-and-manage-authentication-methods-for-the-new-users) parameter set. - -## Synopsis - -
{% include {{ page.version.version }}/sql/generated/diagrams/alter_user_password.html %}
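-
-For illustration, the following statements (using the example user `carl` from the [examples](#examples) later on this page) change a user's password and grant a role option:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-ALTER USER carl WITH PASSWORD 'An0ther$tr0nGpassW0rD' VALID UNTIL '2021-10-10';
-ALTER USER carl WITH CREATEDB;
-~~~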
- -## Parameters - - - -Parameter | Description -----------|------------- -`name` | The name of the user whose role options you want to alter. -`CANCELQUERY`/`NOCANCELQUERY` | Allow or disallow the user to cancel [queries](cancel-query.html) and [sessions](cancel-session.html) of other users. Without this privilege, users can only cancel their own queries and sessions. Even with the `CANCELQUERY` role option, non-`admin` users cannot cancel `admin` queries or sessions. This option should usually be combined with `VIEWACTIVITY` so that the user can view other users' query and session information.

By default, the role option is set to `NOCANCELQUERY` for all non-`admin` users. -`CONTROLCHANGEFEED`/`NOCONTROLCHANGEFEED` | Allow or disallow the user to run [`CREATE CHANGEFEED`](create-changefeed.html) on tables they have `SELECT` privileges on.

By default, the role option is set to `NOCONTROLCHANGEFEED` for all non-`admin` users. -`CONTROLJOB`/`NOCONTROLJOB` | Allow or disallow the user to [pause](pause-job.html), [resume](resume-job.html), and [cancel](cancel-job.html) jobs. Non-`admin` users cannot control jobs created by `admin` users.

By default, the role option is set to `NOCONTROLJOB` for all non-`admin` users. -`CREATEDB`/`NOCREATEDB` | Allow or disallow the user to [create](create-database.html) or [rename](rename-database.html) a database. The user is assigned as the owner of the database.

By default, the role option is set to `NOCREATEDB` for all non-`admin` users. -`CREATELOGIN`/`NOCREATELOGIN` | Allow or disallow the user to manage authentication using the `WITH PASSWORD`, `VALID UNTIL`, and `LOGIN/NOLOGIN` parameters.

By default, the role option is set to `NOCREATELOGIN` for all non-`admin` users. -`CREATEROLE`/`NOCREATEROLE` | Allow or disallow the user to [create](create-user.html), alter, and [drop](drop-user.html) other non-`admin` users.

By default, the role option is set to `NOCREATEROLE` for all non-`admin` users. -`LOGIN`/`NOLOGIN` | The `LOGIN` parameter allows a user to log in with one of the [client authentication methods](authentication.html#client-authentication). Setting the parameter to `NOLOGIN` prevents the user from logging in using any authentication method. -`MODIFYCLUSTERSETTING`/`NOMODIFYCLUSTERSETTING` | Allow or disallow the user to modify the [cluster settings](cluster-settings.html) with the `sql.defaults` prefix.<br>

By default, the role option is set to `NOMODIFYCLUSTERSETTING` for all non-`admin` users. -`password` | Let the user [authenticate their access to a secure cluster](authentication.html#client-authentication) using this new password. Passwords should be entered as a [string literal](sql-constants.html#string-literals). For compatibility with PostgreSQL, a password can also be entered as an identifier.

To prevent a user from using [password authentication](authentication.html#client-authentication) and to mandate [certificate-based client authentication](authentication.html#client-authentication), [set the password as `NULL`](#prevent-a-user-from-using-password-authentication). -`VALID UNTIL` | The date and time (in the [`timestamp`](timestamp.html) format) after which the password is not valid. -`VIEWACTIVITY`/`NOVIEWACTIVITY` | Allow or disallow a user to see other users' [queries](show-statements.html) and [sessions](show-sessions.html) using `SHOW STATEMENTS`, `SHOW SESSIONS`, and the [**Statements**](ui-statements-page.html) and [**Transactions**](ui-transactions-page.html) pages in the DB Console. `VIEWACTIVITY` also permits visibility of node hostnames and IP addresses in the DB Console. With `NOVIEWACTIVITY`, the `SHOW` commands show only the user's own data, and DB Console pages redact node hostnames and IP addresses.

By default, the role option is set to `NOVIEWACTIVITY` for all non-`admin` users. -`VIEWCLUSTERSETTING` / `NOVIEWCLUSTERSETTING` | Allow or disallow the user to view the [cluster settings](cluster-settings.html) with `SHOW CLUSTER SETTING`, and to access the [**Cluster Settings**](ui-debug-pages.html) page in the DB Console.

By default, the role option is set to `NOVIEWCLUSTERSETTING` for all non-`admin` users. - -## Examples - -{{site.data.alerts.callout_info}} -The following statements are run by the `root` user that is a member of the `admin` role and has `ALL` privileges. -{{site.data.alerts.end}} - -### Change a user's password - -~~~ sql -root@:26257/defaultdb> ALTER USER carl WITH PASSWORD 'An0ther$tr0nGpassW0rD' VALID UNTIL '2021-10-10'; -~~~ - -### Prevent a user from using password authentication - -The following statement prevents the user from using password authentication and mandates certificate-based [client authentication](authentication.html#client-authentication): - -{% include_cached copy-clipboard.html %} -~~~ sql -root@:26257/defaultdb> ALTER USER carl WITH PASSWORD NULL; -~~~ - -### Allow a user to create other users and manage authentication methods for the new users - -The following example allows the user to [create other users](create-user.html) and [manage authentication methods](authentication.html#client-authentication) for them: - -~~~ sql -root@:26257/defaultdb> ALTER USER carl WITH CREATEROLE CREATELOGIN; -~~~ - -### Allow a user to create and rename databases - -The following example allows the user to [create](create-database.html) or [rename](rename-database.html) databases: - -~~~ sql -root@:26257/defaultdb> ALTER USER carl WITH CREATEDB; -~~~ - -### Allow a user to pause, resume, and cancel non-admin jobs - -The following example allows the user to [pause](pause-job.html), [resume](resume-job.html), and [cancel](cancel-job.html) jobs: - -~~~ sql -root@:26257/defaultdb> ALTER USER carl WITH CONTROLJOB; -~~~ - -### Allow a user to see and cancel non-admin queries and sessions - -The following example allows the user to cancel [queries](cancel-query.html) and [sessions](cancel-session.html) for other non-`admin` roles: - -~~~ sql -root@:26257/defaultdb> ALTER USER carl WITH CANCELQUERY VIEWACTIVITY; -~~~ - -### Allow a user to control changefeeds - -The following example allows the user to run [`CREATE CHANGEFEED`](create-changefeed.html): - -~~~ sql -root@:26257/defaultdb> ALTER USER carl WITH CONTROLCHANGEFEED; -~~~ - -### Allow a user to modify cluster settings - -The following example allows the user to modify [cluster settings](cluster-settings.html): - -~~~ sql -root@:26257/defaultdb> ALTER USER carl WITH MODIFYCLUSTERSETTING; -~~~ - -## See also - -- [`DROP USER`](drop-user.html) -- [`SHOW USERS`](show-users.html) -- [`GRANT`](grant.html) -- [`SHOW GRANTS`](show-grants.html) -- [Create Security Certificates](cockroach-cert.html) -- [SQL Statements](sql-statements.html) diff --git a/src/current/v22.1/alter-view.md b/src/current/v22.1/alter-view.md deleted file mode 100644 index 839b7b5bc4f..00000000000 --- a/src/current/v22.1/alter-view.md +++ /dev/null @@ -1,144 +0,0 @@ ---- -title: ALTER VIEW -summary: The ALTER VIEW statement applies a schema change to a view. -toc: true -docs_area: reference.sql ---- - -The `ALTER VIEW` [statement](sql-statements.html) applies a schema change to a [view](views.html). - -{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %} - -## Required privileges - -- To alter a view, the user must have the `CREATE` [privilege](security-reference/authorization.html#managing-privileges) on the parent database. -- To change the schema of a view with `ALTER VIEW ... SET SCHEMA`, or to change the name of a view with `ALTER VIEW ... 
RENAME TO`, the user must also have the `DROP` [privilege](security-reference/authorization.html#managing-privileges) on the view. - -## Syntax - -
-{% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/alter_view.html %} -
- -## Parameters - -Parameter | Description -----------|------------ -`MATERIALIZED` | Rename a [materialized view](views.html#materialized-views). -`IF EXISTS` | Rename the view only if a view of `view_name` exists; if one does not exist, do not return an error. -`view_name` | The name of the view to rename. To find view names, use:

`SELECT * FROM information_schema.tables WHERE table_type = 'VIEW';` -`RENAME TO view_name` | Rename the view to `view_name`, which must be unique to its database and follow these [identifier rules](keywords-and-identifiers.html#identifiers). Name changes do not propagate to the table(s) using the view.

Note that `RENAME TO` can be used to move a view from one database to another, but it cannot be used to move a view from one schema to another. To change a view's schema, use `ALTER VIEW ...SET SCHEMA` instead. In a future release, `RENAME TO` will be limited to changing the name of a view, and will not have the ability to change a view's database. -`SET SCHEMA schema_name` | Change the schema of the view to `schema_name`. -`OWNER TO role_spec` | Change the owner of the view to `role_spec`. - -## Limitations - -CockroachDB does not currently support: - -- Changing the [`SELECT`](select-clause.html) statement executed by a view. Instead, you must drop the existing view and create a new view. -- Renaming a view that other views depend on. This feature may be added in the future (see [tracking issue](https://github.com/cockroachdb/cockroach/issues/10083)). - -## Examples - -### Rename a view - -Suppose you create a new view that you want to rename: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE VIEW money_rides (id, revenue) AS SELECT id, revenue FROM rides WHERE revenue > 50; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> WITH x AS (SHOW TABLES) SELECT * FROM x WHERE type = 'view'; -~~~ - -~~~ - schema_name | table_name | type | owner | estimated_row_count | locality ---------------+-------------+------+-------+---------------------+----------- - public | money_rides | view | demo | 0 | NULL -(1 row) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER VIEW money_rides RENAME TO expensive_rides; -~~~ -~~~ -RENAME VIEW -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> WITH x AS (SHOW TABLES) SELECT * FROM x WHERE type = 'view'; -~~~ - -~~~ - schema_name | table_name | type | owner | estimated_row_count | locality ---------------+-----------------+------+-------+---------------------+----------- - public | expensive_rides | view | demo | 0 | NULL -(1 row) -~~~ - -### Change the schema of a view - -Suppose you want to add the `expensive_rides` view to a schema called `cockroach_labs`: - -By default, [unqualified views](sql-name-resolution.html#lookup-with-unqualified-names) created in the database belong to the `public` schema: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW CREATE public.expensive_rides; -~~~ - -~~~ - table_name | create_statement --------------------------+------------------------------------------------------------------------------------------------------------------- - public.expensive_rides | CREATE VIEW public.expensive_rides (id, revenue) AS SELECT id, revenue FROM movr.public.rides WHERE revenue > 50 -(1 row) -~~~ - -If the new schema does not already exist, [create it](create-schema.html): - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE SCHEMA IF NOT EXISTS cockroach_labs; -~~~ - -Then, change the view's schema: - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER VIEW expensive_rides SET SCHEMA cockroach_labs; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW CREATE public.expensive_rides; -~~~ - -~~~ -ERROR: relation "public.expensive_rides" does not exist -SQLSTATE: 42P01 -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW CREATE cockroach_labs.expensive_rides; -~~~ - -~~~ - table_name | create_statement ----------------------------------+--------------------------------------------------------------------------------------------------------------------------- - cockroach_labs.expensive_rides | CREATE VIEW cockroach_labs.expensive_rides (id, 
revenue) AS SELECT id, revenue FROM movr.public.rides WHERE revenue > 50 -(1 row) -~~~ - -## See also - -- [Views](views.html) -- [`CREATE VIEW`](create-view.html) -- [`SHOW CREATE`](show-create.html) -- [`DROP VIEW`](drop-view.html) -- [Online Schema Changes](online-schema-changes.html) diff --git a/src/current/v22.1/api-support-policy.md b/src/current/v22.1/api-support-policy.md deleted file mode 100644 index e8abae4d80f..00000000000 --- a/src/current/v22.1/api-support-policy.md +++ /dev/null @@ -1,93 +0,0 @@ ---- -title: API Support Policy -summary: Learn about Cockroach Labs's policy for supporting CockroachDB APIs. -toc: true -docs_area: reference ---- - -Cockroach Labs exposes various application programming interfaces (APIs). - -The vast majority of changes to these interfaces are seamless additions of new functionality. However, some changes are backward-incompatible and may require you to adjust your integration. Changes to an API are introduced according to its support policy. - -This page includes the following information: - -- Our API [support policies](#support-policies). -- Our definitions of [backward-incompatible](#backward-incompatible-changes) and [backward-compatible](#backward-compatible-changes) changes. -- A summary of [APIs](#apis) that CockroachDB makes available. - -## Support policies - -| Type | Description | Guarantees | -|----------|---------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| Stable | Supported for interfacing with third-party automated tools. | [Backward-incompatible changes](#backward-incompatible-changes) may be introduced in new major versions.
[Backward-compatible changes](#backward-compatible-changes) may be introduced in new patch versions. | -| Unstable | Supported for consumption by humans. Not supported for automation. | [Backward-incompatible changes](#backward-incompatible-changes) may be introduced in new major and patch versions. | -| Reserved | Intended for use by CockroachDB developers. Not supported for public use. | N/A | - -Backward-incompatible changes to **stable APIs** are highlighted in the [release notes](../releases/index.html#production-releases) for major CockroachDB versions. Users are asked to [consider backward-incompatible changes before upgrading](upgrade-cockroach-version.html#review-breaking-changes) to a new CockroachDB version. - -### Backward-incompatible changes - -A change is *backward-incompatible* when existing automation requires an update in order to continue working. These changes are also known as "breaking changes": - -- Removal or renaming of an endpoint, [built-in function](functions-and-operators.html#built-in-functions), [cluster setting](cluster-settings.html), or session variable. -- Removal or renaming of a SQL statement or syntax. -- Addition, removal, or renaming of a mandatory command-line flag or HTTP field. -- Removal or renaming of an optional command-line flag or HTTP field. -- Change in behavior of a [built-in function](functions-and-operators.html#built-in-functions) without fixing a bug or PostgreSQL incompatibility. -- Removal or renaming of possible values in an `ENUM` session variable or [cluster setting](cluster-settings.html). -- Change in non-interactive [`cockroach sql`](cockroach-sql.html) shell input or output. -- Change in behavior of a [structured log event](eventlog.html) type, including the [logging channel](logging-overview.html#logging-channels) it is emitted on. -- Renaming of a [structured log event](eventlog.html) type or payload field. - -### Backward-compatible changes - -A change is *backward-compatible* when existing automation continues to work without updates. - -The following list is not exhaustive: - -- Addition of an optional command-line flag or HTTP field. -- Removal or change of any functionality documented as Preview or otherwise not fully supported. -- Marking functionality as deprecated via in-line documentation, hints, or warnings without removing it altogether. -- Addition or removal of a metric. -- Addition of a structured log event type or payload field. -- Addition of a new [logging channel](logging-overview.html#logging-channels). - -### Versioning - -A stable API may be assigned a new version number in two situations: - -- When changes are introduced to the API. -- When a new CockroachDB version is released. - -## APIs - -{{site.data.alerts.callout_info}} -A *mixed* API includes both stable and unstable features. -{{site.data.alerts.end}} - -| Interface | Policy | Versioning | Notes | Availability | -|-------------------------------------------------------------------------|----------|----------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------| -| [PostgreSQL wire protocol](postgresql-compatibility.html) | Stable | Versioned concurrently with CockroachDB. | Compatible with PostgreSQL version 13. 
| All products | -| [SQL syntax](sql-feature-support.html) | Mixed | Versioned concurrently with CockroachDB. | Best-effort policy to add and not remove SQL syntax. All `SHOW` statements are unstable, as described in the following row. | All products | -| `SHOW` SQL statements | Unstable | Versioned concurrently with CockroachDB. | This includes all documented SQL `SHOW` statements, which display unstable output. | All products | -| [`information_schema` system catalog](information-schema.html) | Stable | Versioned concurrently with CockroachDB. | | All products | -| [`pg_catalog` system catalog](pg-catalog.html) | Stable | Versioned concurrently with CockroachDB. | | All products | -| [`pg_extension` system catalog](pg-extension.html) | Stable | Versioned concurrently with CockroachDB. | | All products | -| [`crdb_internal` system catalog](crdb-internal.html) | Reserved | Versioned concurrently with CockroachDB. | A [subset of the `crdb_internal` system catalog](crdb-internal.html#tables) is stable. | All products | -| [Built-in functions](functions-and-operators.html#built-in-functions) | Mixed | Versioned concurrently with CockroachDB. | Any built-in functions prefixed with `crdb_internal` are reserved. | All products | -| [`cockroach` commands](cockroach-commands.html) | Mixed | Versioned concurrently with CockroachDB. | Stability considerations for `cockroach sql` are described in the following row. | All products | -| [`cockroach sql` shell](cockroach-sql.html) | Mixed | Versioned concurrently with CockroachDB. | When used non-interactively, `cockroach sql` is stable unless your usage relies on unstable input or output. Any `cockroach sql` output prefixed by `#` is unstable. When used interactively, `cockroach sql` is unstable. | All products | -| [Health endpoints](monitoring-and-alerting.html#health-endpoints) | Stable | No new versions forthcoming. | | All products | -| [Prometheus endpoint](monitoring-and-alerting.html#prometheus-endpoint) | Stable | No new versions forthcoming. | Although this endpoint is not versioned, individual metrics may be added or removed in each CockroachDB release. No changes are expected to response format. | CockroachDB {{ site.data.products.dedicated }}, CockroachDB {{ site.data.products.core }} | -| [Cluster API](cluster-api.html) | Mixed | [Versioned independently from CockroachDB.](cluster-api.html#versioning-and-stability) | For information on supported endpoints, see [Cluster API](cluster-api.html). | CockroachDB {{ site.data.products.dedicated }}, CockroachDB {{ site.data.products.core }} | -| [DB Console](ui-overview.html) | Unstable | N/A | For stable access to the information present in this tool, use the [Cluster API](cluster-api.html). | CockroachDB {{ site.data.products.dedicated }}, CockroachDB {{ site.data.products.core }} | -| [Logging](logging-overview.html) | Mixed | Versioned concurrently with CockroachDB. | Stability varies by [event type](eventlog.html). Structured events are stable and unstructured events are unstable. | CockroachDB {{ site.data.products.dedicated }}, CockroachDB {{ site.data.products.core }} | -| [`ccloud` CLI](../cockroachcloud/ccloud-get-started.html) | Mixed | Versioned independently from CockroachDB. | Default output is unstable. Specify the `–json` argument in the CLI for stable output that follows the versioning scheme. | CockroachDB {{ site.data.products.cloud }} | -| [CockroachDB {{ site.data.products.cloud }} API](../cockroachcloud/cloud-api.html) | Stable | Versioned independently from CockroachDB. 
| | CockroachDB {{ site.data.products.cloud }} | -| CockroachDB {{ site.data.products.cloud }} Console | Unstable | N/A | | CockroachDB {{ site.data.products.cloud }} | -| [Advanced Debug endpoints](ui-debug-pages.html) | Reserved | N/A | | N/A | - -## See also - -- [Release Support Policy](../releases/release-support-policy.html) -- [Monitoring and Alerting](monitoring-and-alerting.html) \ No newline at end of file diff --git a/src/current/v22.1/apply-statement-performance-rules.md b/src/current/v22.1/apply-statement-performance-rules.md deleted file mode 100644 index 7b478a44d3a..00000000000 --- a/src/current/v22.1/apply-statement-performance-rules.md +++ /dev/null @@ -1,433 +0,0 @@ ---- -title: Apply SQL Statement Performance Rules -summary: How to apply SQL statement performance rules to optimize a query. -toc: true -docs_area: develop ---- - -This tutorial shows how to apply [SQL statement performance rules](make-queries-fast.html#sql-statement-performance-rules) to optimize a query against the [`movr` example dataset](cockroach-demo.html#datasets). - -## Before you begin - -{% include {{ page.version.version }}/demo_movr.md %} - -It's common to offer users promo codes to increase usage and customer loyalty. In this scenario, you want to find the 10 users who have taken the highest number of rides on a given date, and offer them promo codes that provide a 10% discount. To phrase it in the form of a question: "Who are the top 10 users by number of rides on a given date?" - -## Rule 1. Scan as few rows as possible - -First, study the schema so you understand the relationships between the tables. Run [`SHOW TABLES`](show-tables.html): - -{% include_cached copy-clipboard.html %} -~~~ sql -SHOW TABLES; -~~~ - -~~~ - schema_name | table_name | type | estimated_row_count ---------------+----------------------------+-------+---------------------- - public | promo_codes | table | 250000 - public | rides | table | 125000 - public | user_promo_codes | table | 0 - public | users | table | 12500 - public | vehicle_location_histories | table | 250000 - public | vehicles | table | 3750 -(6 rows) - -Time: 17ms total (execution 17ms / network 0ms) -~~~ - -Look at the schema for the `users` table: - -{% include_cached copy-clipboard.html %} -~~~ sql -SHOW CREATE TABLE users; -~~~ - -~~~ - table_name | create_statement --------------+-------------------------------------------------------------- - users | CREATE TABLE public.users ( - | id UUID NOT NULL, - | city VARCHAR NOT NULL, - | name VARCHAR NULL, - | address VARCHAR NULL, - | credit_card VARCHAR NULL, - | CONSTRAINT users_pkey PRIMARY KEY (city ASC, id ASC) - | ) -(1 row) - -Time: 9ms total (execution 9ms / network 0ms) -~~~ - -There's no information about the number of rides taken here, nor anything about the days on which rides occurred. Luckily, there is also a `rides` table. 
Let's look at it: - -{% include_cached copy-clipboard.html %} -~~~ sql -SHOW CREATE TABLE rides; -~~~ - -~~~ - table_name | create_statement --------------+---------------------------------------------------------------------------------------------------------------------------------- - rides | CREATE TABLE public.rides ( - | id UUID NOT NULL, - | city VARCHAR NOT NULL, - | vehicle_city VARCHAR NULL, - | rider_id UUID NULL, - | vehicle_id UUID NULL, - | start_address VARCHAR NULL, - | end_address VARCHAR NULL, - | start_time TIMESTAMP NULL, - | end_time TIMESTAMP NULL, - | revenue DECIMAL(10,2) NULL, - | CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC), - | CONSTRAINT fk_city_ref_users FOREIGN KEY (city, rider_id) REFERENCES public.users(city, id), - | CONSTRAINT fk_vehicle_city_ref_vehicles FOREIGN KEY (vehicle_city, vehicle_id) REFERENCES public.vehicles(city, id), - | INDEX rides_auto_index_fk_city_ref_users (city ASC, rider_id ASC), - | INDEX rides_auto_index_fk_vehicle_city_ref_vehicles (vehicle_city ASC, vehicle_id ASC), - | FAMILY "primary" (id, city, vehicle_city, rider_id, vehicle_id, start_address, end_address, start_time, end_time, revenue), - | CONSTRAINT check_vehicle_city_city CHECK (vehicle_city = city) - | ) -(1 row) - -Time: 9ms total (execution 8ms / network 1ms) -~~~ - -There is a `rider_id` field that you can use to match each ride to a user. There is also a `start_time` field that you can use to filter the rides by date. - -This means that to get the information you want, you'll need to do a [join](joins.html) on the `users` and `rides` tables. - -Next, get the row counts for the tables that you'll be using in this query. You need to understand which tables are large, and which are small by comparison. You will need this later if you need to verify you are [using the right join type](#rule-3-use-the-right-join-type). - -As specified by your [`cockroach demo`](cockroach-demo.html) command, the `users` table has 12,500 records, and the `rides` table has 125,000 records. Because it's so large, you want to avoid scanning the entire `rides` table in your query. In this case, you can avoid scanning `rides` using an index, as shown in the next section. - -## Rule 2. Use the right index - -Here is a query that fetches the right answer to your question: "Who are the top 10 users by number of rides on a given date?" - -{% include_cached copy-clipboard.html %} -~~~ sql -SELECT - name, count(rides.id) AS sum -FROM - users JOIN rides ON users.id = rides.rider_id -WHERE - rides.start_time BETWEEN '2018-12-31 00:00:00' AND '2020-01-01 00:00:00' -GROUP BY - name -ORDER BY - sum DESC -LIMIT - 10; -~~~ - -~~~ - name | sum --------------------+------ - William Brown | 14 - William Mitchell | 10 - Joseph Smith | 10 - Paul Nelson | 9 - Christina Smith | 9 - Jeffrey Walker | 8 - Jennifer Johnson | 8 - Joseph Jones | 7 - Thomas Smith | 7 - James Williams | 7 -(10 rows) - -Time: 111ms total (execution 111ms / network 0ms) -~~~ - -Unfortunately, this query is a bit slow. 111 milliseconds puts you [over the limit where a user feels the system is reacting instantaneously](https://www.nngroup.com/articles/response-times-3-important-limits/), and you're still down in the database layer. This data still needs to be sent back to your application and displayed. 
- -You can see why if you look at the output of [`EXPLAIN`](explain.html): - -{% include_cached copy-clipboard.html %} -~~~ sql -EXPLAIN SELECT - name, count(rides.id) AS sum -FROM - users JOIN rides ON users.id = rides.rider_id -WHERE - rides.start_time BETWEEN '2018-12-31 00:00:00' AND '2020-01-01 00:00:00' -GROUP BY - name -ORDER BY - sum DESC -LIMIT - 10; -~~~ - -~~~ - info -------------------------------------------------------------------------------------------------------------- - distribution: full - vectorized: true - - • limit - │ estimated row count: 10 - │ count: 10 - │ - └── • sort - │ estimated row count: 7,772 - │ order: -count_rows - │ - └── • group - │ estimated row count: 7,772 - │ group by: name - │ - └── • hash join - │ estimated row count: 12,863 - │ equality: (rider_id) = (id) - │ - ├── • filter - │ │ estimated row count: 12,863 - │ │ filter: (start_time >= '2018-12-31 00:00:00') AND (start_time <= '2020-01-01 00:00:00') - │ │ - │ └── • scan - │ estimated row count: 125,000 (100% of the table; stats collected 54 seconds ago) - │ table: rides@rides_pkey - │ spans: FULL SCAN - │ - └── • scan - estimated row count: 12,500 (100% of the table; stats collected 2 minutes ago) - table: users@users_pkey - spans: FULL SCAN -(32 rows) - - -Time: 2ms total (execution 2ms / network 0ms) -~~~ - -The main problem is that you are doing full table scans on both the `users` and `rides` tables (see `spans: FULL SCAN`). This tells you that you do not have indexes on the columns in your `WHERE` clause, which is [an indexing best practice](indexes.html#best-practices). - -Therefore, you need to create an index on the column in your `WHERE` clause, in this case: `rides.start_time`. - -It's also possible that there is not an index on the `rider_id` column that you are doing a join against, which will also hurt performance. - -Before creating any more indexes, let's see what indexes already exist on the `rides` table by running [`SHOW INDEXES`](show-index.html): - -{% include_cached copy-clipboard.html %} -~~~ sql -SHOW INDEXES FROM rides; -~~~ - -~~~ - table_name | index_name | non_unique | seq_in_index | column_name | direction | storing | implicit --------------+-----------------------------------------------+------------+--------------+--------------+-----------+---------+----------- - rides | rides_pkey | false | 1 | city | ASC | false | false - rides | rides_pkey | false | 2 | id | ASC | false | false - rides | rides_auto_index_fk_city_ref_users | true | 1 | city | ASC | false | false - rides | rides_auto_index_fk_city_ref_users | true | 2 | rider_id | ASC | false | false - rides | rides_auto_index_fk_city_ref_users | true | 3 | id | ASC | false | true - rides | rides_auto_index_fk_vehicle_city_ref_vehicles | true | 1 | vehicle_city | ASC | false | false - rides | rides_auto_index_fk_vehicle_city_ref_vehicles | true | 2 | vehicle_id | ASC | false | false - rides | rides_auto_index_fk_vehicle_city_ref_vehicles | true | 3 | id | ASC | false | true - rides | rides_auto_index_fk_vehicle_city_ref_vehicles | true | 4 | city | ASC | false | true -(9 rows) - -Time: 5ms total (execution 5ms / network 0ms) -~~~ - -As suspected, there are no indexes on `start_time` or `rider_id`, so you'll need to create indexes on those columns. 
- -Because another performance best practice is to [create an index on the `WHERE` condition storing the join key](sql-tuning-with-explain.html#solution-create-a-secondary-index-on-the-where-condition-storing-the-join-key), create an index on `start_time` that stores the join key `rider_id`: - -{% include_cached copy-clipboard.html %} -~~~ sql -CREATE INDEX ON rides (start_time) storing (rider_id); -~~~ - -Now that you have an index on the column in your `WHERE` clause that stores the join key, let's run the query again: - -{% include_cached copy-clipboard.html %} -~~~ sql -SELECT - name, count(rides.id) AS sum -FROM - users JOIN rides ON users.id = rides.rider_id -WHERE - rides.start_time BETWEEN '2018-12-31 00:00:00' AND '2020-01-01 00:00:00' -GROUP BY - name -ORDER BY - sum DESC -LIMIT - 10; -~~~ - -~~~ - name | sum --------------------+------ - William Brown | 14 - William Mitchell | 10 - Joseph Smith | 10 - Paul Nelson | 9 - Christina Smith | 9 - Jeffrey Walker | 8 - Jennifer Johnson | 8 - Joseph Jones | 7 - Thomas Smith | 7 - James Williams | 7 -(10 rows) - -Time: 20ms total (execution 20ms / network 0ms) -~~~ - -This query is now running much faster than it was before you added the indexes (111ms vs. 20ms). This means you have an extra 91 milliseconds you can budget towards other areas of your application. - -To see what changed, look at the [`EXPLAIN`](explain.html) output: - -{% include_cached copy-clipboard.html %} -~~~ sql -EXPLAIN SELECT - name, count(rides.id) AS sum -FROM - users JOIN rides ON users.id = rides.rider_id -WHERE - rides.start_time BETWEEN '2018-12-31 00:00:00' AND '2020-01-01 00:00:00' -GROUP BY - name -ORDER BY - sum DESC -LIMIT - 10; -~~~ - -As you can see, this query is no longer scanning the entire (larger) `rides` table. Instead, it is now doing a much smaller range scan against only the values in `rides` that match the index you just created on the `start_time` column (12,863 rows instead of 125,000). - -~~~ - info ----------------------------------------------------------------------------------------------------- - distribution: full - vectorized: true - - • limit - │ estimated row count: 10 - │ count: 10 - │ - └── • sort - │ estimated row count: 7,772 - │ order: -count_rows - │ - └── • group - │ estimated row count: 7,772 - │ group by: name - │ - └── • hash join - │ estimated row count: 12,863 - │ equality: (rider_id) = (id) - │ - ├── • scan - │ estimated row count: 12,863 (10% of the table; stats collected 5 minutes ago) - │ table: rides@rides_start_time_idx - │ spans: [/'2018-12-31 00:00:00' - /'2020-01-01 00:00:00'] - │ - └── • scan - estimated row count: 12,500 (100% of the table; stats collected 6 minutes ago) - table: users@users_pkey - spans: FULL SCAN -(28 rows) - - -Time: 2ms total (execution 2ms / network 1ms) -~~~ - -## Rule 3. Use the right join type - -Out of the box, the [cost-based optimizer](cost-based-optimizer.html) will select the right join type for your statement in the majority of cases. Therefore, you should only provide [join hints](cost-based-optimizer.html#join-hints) in your query if you can **prove** to yourself through experimentation that the optimizer should be using a different [join type](joins.html#join-algorithms) than it is selecting. - -You can confirm that in this case the optimizer has already found the right join type for this statement by using a hint to force another join type. 
- -For example, you might think that a [lookup join](joins.html#lookup-joins) could perform better in this instance, since one of the tables in the join is 10x smaller than the other. - -In order to get CockroachDB to plan a lookup join in this case, you will need to add an explicit index on the join key for the right-hand-side table, in this case, `rides`. - -{% include_cached copy-clipboard.html %} -~~~ sql -CREATE INDEX ON rides (rider_id); -~~~ - -Next, you can specify the lookup join with a join hint: - -{% include_cached copy-clipboard.html %} -~~~ sql -SELECT - name, count(rides.id) AS sum -FROM - users INNER LOOKUP JOIN rides ON users.id = rides.rider_id -WHERE - (rides.start_time BETWEEN '2018-12-31 00:00:00' AND '2020-01-01 00:00:00') -GROUP BY - name -ORDER BY - sum DESC -LIMIT - 10; -~~~ - -~~~ - name | sum -+------------------+-----+ - William Brown | 14 - William Mitchell | 10 - Joseph Smith | 10 - Paul Nelson | 9 - Christina Smith | 9 - Jeffrey Walker | 8 - Jennifer Johnson | 8 - Joseph Jones | 7 - Thomas Smith | 7 - James Williams | 7 -(10 rows) - - -Time: 985ms total (execution 985ms / network 0ms) -~~~ - -The results, however, are not good. The query is much slower using a lookup join than what CockroachDB planned for you earlier. - -The query is faster when you force CockroachDB to use a merge join: - -{% include_cached copy-clipboard.html %} -~~~ sql -SELECT - name, count(rides.id) AS sum -FROM - users INNER MERGE JOIN rides ON users.id = rides.rider_id -WHERE - (rides.start_time BETWEEN '2018-12-31 00:00:00' AND '2020-01-01 00:00:00') -GROUP BY - name -ORDER BY - sum DESC -LIMIT - 10; -~~~ - -~~~ - name | sum -+------------------+-----+ - William Brown | 14 - William Mitchell | 10 - Joseph Smith | 10 - Paul Nelson | 9 - Christina Smith | 9 - Jennifer Johnson | 8 - Jeffrey Walker | 8 - Joseph Jones | 7 - Thomas Smith | 7 - James Williams | 7 -(10 rows) - - -Time: 23ms total (execution 22ms / network 0ms) -~~~ - -The results are consistently about 20-26ms with a merge join versus 16-23ms when you let CockroachDB choose the join type as shown in the previous section. In other words, forcing the merge join is slightly slower than if you had done nothing. - -## See also - -- [SQL Best Practices](performance-best-practices-overview.html) -- [Troubleshoot SQL Behavior](query-behavior-troubleshooting.html) diff --git a/src/current/v22.1/architecture/distribution-layer.md b/src/current/v22.1/architecture/distribution-layer.md deleted file mode 100644 index 681b56e53be..00000000000 --- a/src/current/v22.1/architecture/distribution-layer.md +++ /dev/null @@ -1,242 +0,0 @@ ---- -title: Distribution Layer -summary: The distribution layer of CockroachDB's architecture provides a unified view of your cluster's data. -toc: true -docs_area: reference.architecture ---- - -The distribution layer of CockroachDB's architecture provides a unified view of your cluster's data. - -{{site.data.alerts.callout_info}} -If you haven't already, we recommend reading the [Architecture Overview](overview.html). -{{site.data.alerts.end}} - -## Overview - -To make all data in your cluster accessible from any node, CockroachDB stores data in a monolithic sorted map of key-value pairs. This key-space describes all of the data in your cluster, as well as its location, and is divided into what we call "ranges", contiguous chunks of the key-space, so that every key can always be found in a single range. 
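As a rough mental model, locating the range that owns a key is an ordered search over range boundaries. The following Go sketch is purely illustrative (it is not CockroachDB's actual data structure, and the range bounds are made up):

~~~ go
package main

import (
	"fmt"
	"sort"
)

// rangeDesc is a simplified stand-in for a range: it owns every key in
// [startKey, endKey).
type rangeDesc struct {
	startKey, endKey string
}

// rangeFor returns the range containing key, assuming ranges is sorted by
// startKey and covers the keyspace contiguously (as CockroachDB's ranges do).
func rangeFor(ranges []rangeDesc, key string) rangeDesc {
	i := sort.Search(len(ranges), func(i int) bool {
		// The first range whose end lies beyond the key must contain it.
		return key < ranges[i].endKey
	})
	return ranges[i]
}

func main() {
	ranges := []rangeDesc{
		{startKey: "A", endKey: "G"},
		{startKey: "G", endKey: "M"},
		{startKey: "M", endKey: "Z"},
	}
	// "H" falls in exactly one range: [G, M).
	fmt.Println(rangeFor(ranges, "H"))
}
~~~

Because the key-space is ordered and the ranges are contiguous, a single ordered search is enough to answer "which range holds this key?", which is what makes the simple lookups and efficient scans described below possible.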
- -CockroachDB implements a sorted map to enable: - - - **Simple lookups**: Because we identify which nodes are responsible for certain portions of the data, queries are able to quickly locate where to find the data they want. - - **Efficient scans**: By defining the order of data, it's easy to find data within a particular range during a scan. - -### Monolithic sorted map structure - -The monolithic sorted map is comprised of two fundamental elements: - -- System data, which include **meta ranges** that describe the locations of data in your cluster (among many other cluster-wide and local data elements) -- User data, which store your cluster's **table data** - -#### Meta ranges - -The locations of all ranges in your cluster are stored in a two-level index at the beginning of your key-space, known as meta ranges, where the first level (`meta1`) addresses the second, and the second (`meta2`) addresses data in the cluster. - -This two-level index plus user data can be visualized as a tree, with the root at `meta1`, the second level at `meta2`, and the leaves of the tree made up of the ranges that hold user data. - -![range-lookup.png](../../images/{{page.version.version}}/range-lookup.png "Meta ranges plus user data tree diagram") - -Importantly, every node has information on where to locate the `meta1` range (known as its range descriptor, detailed below), and the range is never split. - -This meta range structure lets us address up to 4EiB of user data by default: we can address 2^(18 + 18) = 2^36 ranges; each range addresses 2^26 B, and altogether we address 2^(36+26) B = 2^62 B = 4EiB. However, with larger range sizes, it's possible to expand this capacity even further. - -Meta ranges are treated mostly like normal ranges and are accessed and replicated just like other elements of your cluster's KV data. - -Each node caches values of the `meta2` range it has accessed before, which optimizes access of that data in the future. Whenever a node discovers that its `meta2` cache is invalid for a specific key, the cache is updated by performing a regular read on the `meta2` range. - -#### Table data - -After the node's meta ranges is the KV data your cluster stores. - -Each table and its secondary indexes initially map to a single range, where each key-value pair in the range represents a single row in the table (also called the primary index because the table is sorted by the primary key) or a single row in a secondary index. As soon as a range reaches 512 MiB in size, it splits into two ranges. This process continues as a table and its indexes continue growing. Once a table is split across multiple ranges, it's likely that the table and secondary indexes will be stored in separate ranges. However, a range can still contain data for both the table and a secondary index. - -The default 512 MiB range size represents a sweet spot for us between a size that's small enough to move quickly between nodes, but large enough to store a meaningfully contiguous set of data whose keys are more likely to be accessed together. These ranges are then shuffled around your cluster to ensure survivability. - -These table ranges are replicated (in the aptly named replication layer), and have the addresses of each replica stored in the `meta2` range. - -### Using the monolithic sorted map - -As described in the [meta ranges section](#meta-ranges), the locations of all the ranges in a cluster are stored in a two-level index: - -- The first level (`meta1`) addresses the second level. 
-- The second level (`meta2`) addresses user data. - -This can also be visualized as a tree, with the root at `meta1`, the second level at `meta2`, and the leaves of the tree made up of the ranges that hold user data. - -When a node receives a request, it looks up the location of the range(s) that include the keys in the request in a bottom-up fashion, starting with the leaves of this tree. This process works as follows: - -1. For each key, the node looks up the location of the range containing the specified key in the second level of range metadata (`meta2`). That information is cached for performance; if the range's location is found in the cache, it is returned immediately. - -2. If the range's location is not found in the cache, the node looks up the location of the range where the actual value of `meta2` resides. This information is also cached; if the location of the `meta2` range is found in the cache, the node sends an RPC to the `meta2` range to get the location of the keys the request wants to operate on, and returns that information. - -3. Finally, if the location of the `meta2` range is not found in the cache, the node looks up the location of the range where the actual value of the first level of range metadata (`meta1`) resides. This lookup always succeeds because the location of `meta1` is distributed among all the nodes in the cluster using a gossip protocol. The node then uses the information from `meta1` to look up the location of `meta2`, and from `meta2` it looks up the locations of the ranges that include the keys in the request. - -Note that the process described above is recursive; every time a lookup is performed, it either (1) gets a location from the cache, or (2) performs another lookup on the value in the next level "up" in the tree. Because the range metadata is cached, a lookup can usually be performed without having to send an RPC to another node. - -Now that the node has the location of the range where the key from the request resides, it sends the KV operations from the request along to the range (using the [`DistSender`](#distsender) machinery) in a [`BatchRequest`](#batchrequest). - -### Interactions with other layers - -In relationship to other layers in CockroachDB, the distribution layer: - -- Receives requests from the transaction layer on the same node. -- Identifies which nodes should receive the request, and then sends the request to the proper node's replication layer. - -## Technical details and components - -### gRPC - -gRPC is the software nodes use to communicate with one another. Because the distribution layer is the first layer to communicate with other nodes, CockroachDB implements gRPC here. - -gRPC requires inputs and outputs to be formatted as protocol buffers (protobufs). To leverage gRPC, CockroachDB implements a protocol-buffer-based API defined in `api.proto`. - -For more information about gRPC, see the [official gRPC documentation](http://www.grpc.io/docs/guides/). - -### BatchRequest - -All KV operation requests are bundled into a [protobuf](https://en.wikipedia.org/wiki/Protocol_Buffers), known as a `BatchRequest`. The destination of this batch is identified in the `BatchRequest` header, as well as a pointer to the request's transaction record. (On the other side, when a node is replying to a `BatchRequest`, it uses a protobuf––`BatchResponse`.) - -This `BatchRequest` is also what's used to send requests between nodes using gRPC, which accepts and sends protocol buffers. 
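Conceptually, a batch is just a header plus an ordered list of KV operations. The Go sketch below is illustrative only; the real definitions are protobuf messages in `api.proto` and carry considerably more state:

~~~ go
package kvsketch

// BatchHeader identifies where the batch is going and which transaction
// (if any) issued it. Illustrative only; not the generated protobuf type.
type BatchHeader struct {
	RangeID int64  // destination range for this batch
	TxnID   string // points back to the issuing transaction's record
}

// KVOp is a single key-value operation, e.g. the Put() produced from an INSERT.
type KVOp struct {
	Method string // "Get", "Put", "Scan", ...
	Key    []byte
	Value  []byte // unset for reads
}

// BatchRequest bundles every KV operation bound for one range.
type BatchRequest struct {
	Header BatchHeader
	Ops    []KVOp
}

// BatchResponse carries one result per operation, in request order.
type BatchResponse struct {
	Results [][]byte
	Err     error
}
~~~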
- -### DistSender - -The gateway/coordinating node's `DistSender` receives `BatchRequest`s from its own `TxnCoordSender`. `DistSender` is then responsible for breaking up `BatchRequests` and routing a new set of `BatchRequests` to the nodes it identifies contain the data using the system [meta ranges](#meta-ranges). For a description of the process by which this lookup from a key to the node holding the key's range is performed, see [Using the monolithic sorted map](#using-the-monolithic-sorted-map). - -It sends the `BatchRequest`s to the replicas of a range, ordered in expectation of request latency. The leaseholder is tried first, if the request needs it. Requests received by a non-leaseholder may fail with an error pointing at the replica's last known leaseholder. These requests are retried transparently with the updated lease by the gateway node and never reach the client. - -As nodes begin replying to these commands, `DistSender` also aggregates the results in preparation for returning them to the client. - -### Meta range KV structure - -Like all other data in your cluster, meta ranges are structured as KV pairs. Both meta ranges have a similar structure: - -~~~ -metaX/successorKey -> [list of nodes containing data] -~~~ - -Element | Description ---------|------------------------ -`metaX` | The level of meta range. Here we use a simplified `meta1` or `meta2`, but these are actually represented in `cockroach` as `\x02` and `\x03` respectively. -`successorKey` | The first key *greater* than the key you're scanning for. This makes CockroachDB's scans efficient; it simply scans the keys until it finds a value greater than the key it's looking for, and that is where it finds the relevant data.

The `successorKey` for the end of a keyspace is identified as `maxKey`. - -Here's an example: - -~~~ -meta2/M -> node1:26257, node2:26257, node3:26257 -~~~ - -In this case, the replica on `node1` is the leaseholder, and nodes 2 and 3 also contain replicas. - -#### Example - -Let's imagine we have an alphabetically sorted column, which we use for lookups. Here are what the meta ranges would approximately look like: - -1. `meta1` contains the address for the nodes containing the `meta2` replicas. - - ~~~ - # Points to meta2 range for keys [A-M) - meta1/M -> node1:26257, node2:26257, node3:26257 - - # Points to meta2 range for keys [M-Z] - meta1/maxKey -> node4:26257, node5:26257, node6:26257 - ~~~ - -2. `meta2` contains addresses for the nodes containing the replicas of each range in the cluster: - - ~~~ - # Contains [A-G) - meta2/G -> node1:26257, node2:26257, node3:26257 - - # Contains [G-M) - meta2/M -> node1:26257, node2:26257, node3:26257 - - #Contains [M-Z) - meta2/Z -> node4:26257, node5:26257, node6:26257 - - #Contains [Z-maxKey) - meta2/maxKey-> node4:26257, node5:26257, node6:26257 - ~~~ - -### Table data KV structure - -Key-value data, which represents the data in your tables using the following structure: - -~~~ -/
// -> -~~~ - -The table itself is stored with an `index_id` of 1 for its `PRIMARY KEY` columns, with the rest of the columns in the table considered as stored/covered columns. - -### Range descriptors - -Each range in CockroachDB contains metadata, known as a range descriptor. A range descriptor is comprised of the following: - -- A sequential RangeID -- The keyspace (i.e., the set of keys) the range contains; for example, the first and last `` in the table data KV structure above. This determines the `meta2` range's keys. -- The addresses of nodes containing replicas of the range. This determines the `meta2` range's key's values. - -Because range descriptors comprise the key-value data of the `meta2` range, each node's `meta2` cache also stores range descriptors. - -Range descriptors are updated whenever there are: - -- Membership changes to a range's Raft group (discussed in more detail in the [Replication Layer](replication-layer.html#membership-changes-rebalance-repair)) -- Range splits -- Range merges - -All of these updates to the range descriptor occur locally on the range, and then propagate to the `meta2` range. - -### Range splits - -By default, CockroachDB attempts to keep ranges/replicas at the default range size (currently 512 MiB). Once a range reaches that limit, we split it into two smaller ranges (composed of contiguous key spaces). - -During this range split, the node creates a new Raft group containing all of the same members as the range that was split. The fact that there are now two ranges also means that there is a transaction that updates `meta2` with the new keyspace boundaries, as well as the addresses of the nodes using the range descriptor. - -### Range merges - -By default, CockroachDB automatically merges small ranges of data together to form fewer, larger ranges (up to the default range size). This can improve both query latency and cluster survivability. - -#### How range merges work - -As [described above](#overview), CockroachDB splits your cluster's data into many ranges. For example, your cluster might have a range for customers whose IDs are between `[1000, 2000)`. If that range grows beyond the default range size, the range is [split into two smaller ranges](#range-splits). - -However, as you delete data from your cluster, a range might contain far less data than the default range size. Over the lifetime of a cluster, this could lead to a number of small ranges. - -To reduce the number of small ranges, your cluster can have any range below a certain size threshold try to merge with its "right-hand neighbor", i.e., the range that starts where the current range ends. Using our example above, this range's right-hand neighbor might be the range for customers whose IDs are between `[2000, 3000)`. - -If the combined size of the small range and its neighbor is less than the maximum range size, the ranges merge into a single range. In our example, this will create a new range of keys `[1000, 3000)`. - -{{site.data.alerts.callout_info}} -When ranges merge, the left-hand-side (LHS) range consumes the right-hand-side (RHS) range. -{{site.data.alerts.end}} - -#### Why range merges improve performance - -##### Query latency - -Queries in CockroachDB must contact a replica of each range involved in the query. This creates the following issues for clusters with many small ranges: - -- Queries incur a fixed overhead in terms of processing time for each range they must coordinate with. 
-- Having many small ranges can increase the number of machines your query must coordinate with. This exposes your query to a greater likelihood of running into issues like network latency or overloaded nodes. - -By merging small ranges, CockroachDB can greatly reduce the number of ranges involved in queries, thus reducing query latency. - -##### Survivability - -CockroachDB automatically rebalances the distribution of ranges in your cluster whenever nodes come online or go offline. - -During rebalancing, it's better to replicate a few larger ranges across nodes than many smaller ranges. Replicating larger ranges requires less coordination and often completes more quickly. - -By merging smaller ranges together, your cluster needs to rebalance fewer total ranges. This ultimately improves your cluster's performance, especially in the face of availability events like node outages. - -## Technical interactions with other layers - -### Distribution and transaction layer - -The distribution layer's `DistSender` receives `BatchRequests` from its own node's `TxnCoordSender`, housed in the transaction layer. - -### Distribution and replication layer - -The distribution layer routes `BatchRequests` to nodes containing ranges of data, which is ultimately routed to the Raft group leader or leaseholder, which are handled in the replication layer. - -## What's next? - -Learn how CockroachDB copies data and ensures consistency in the [replication layer](replication-layer.html). diff --git a/src/current/v22.1/architecture/glossary.md b/src/current/v22.1/architecture/glossary.md deleted file mode 100644 index ce050ede1d3..00000000000 --- a/src/current/v22.1/architecture/glossary.md +++ /dev/null @@ -1,26 +0,0 @@ ---- -title: Glossary -summary: Learn about database, CockroachDB architecture and deployment, and CockroachCloud terminology. -toc: true -docs_area: get_started ---- - -This page defines terms that you will encounter throughout the documentation. - -{% include {{ page.version.version }}/misc/database-terms.md %} - -{% include {{ page.version.version }}/misc/basic-terms.md %} - -For more information on CockroachDB architecture, see [Architecture Overview](overview.html#overview). - -## CockroachDB deployment terms - -Term | Definition ------|----------- -**single tenant** | A type of CockroachDB deployment where a single customer uses the database cluster. -**multi-tenant** | A type of CockroachDB deployment where multiple customers share a single storage cluster. Each customer sees a virtual CockroachDB cluster. Data in each virtual cluster is isolated and is invisible to other customers. -**region** | A logical identification of how nodes and data are clustered around [geographical locations](../multiregion-overview.html). A _cluster region_ is the set of locations where cluster nodes are running. A _database region_ is the subset of cluster regions database data should be restricted to. -**availability zone** | A part of a data center that is considered to form a unit with regards to failures and fault tolerance. There can be multiple nodes in a single availability zone, however Cockroach Labs recommends that you to place different replicas of your data in different availability zones. -**[CockroachDB Self-Hosted](../start-a-local-cluster.html)** | A full featured, self-managed CockroachDB deployment. 
- -{% include common/basic-terms.md %} diff --git a/src/current/v22.1/architecture/life-of-a-distributed-transaction.md b/src/current/v22.1/architecture/life-of-a-distributed-transaction.md deleted file mode 100644 index 392e532b338..00000000000 --- a/src/current/v22.1/architecture/life-of-a-distributed-transaction.md +++ /dev/null @@ -1,188 +0,0 @@ ---- -title: Life of a Distributed Transaction -summary: Learn how a query moves through the layers of CockroachDB's architecture. -toc: true -docs_area: reference.architecture ---- - -Because CockroachDB is a distributed transactional database, the path queries take is dramatically different from many other database architectures. To help familiarize you with CockroachDB's internals, this guide covers what that path actually is. - -If you've already read the [CockroachDB architecture documentation](overview.html), this guide serves as another way to conceptualize how the database works. This time, instead of focusing on the layers of CockroachDB's architecture, we're going to focus on the linear path that a query takes through the system (and then back out again). - -To get the most out of this guide, we recommend beginning with the architecture documentation's [overview](overview.html) and progressing through all of the following sections. This guide provides brief descriptions of each component's function and links to other documentation where appropriate, but assumes the reader has a basic understanding of the architecture in the first place. - -## Overview - -This guide is organized by the physical actors in the system, and then broken down into the components of each actor in the sequence in which they're involved. - -Here's a brief overview of the physical actors, in the sequence with which they're involved in executing a query: - -1. [**SQL Client**](#sql-client-postgresql-wire-protocol) sends a query to your cluster. -1. [**Load Balancing**](#load-balancing-routing) routes the request to CockroachDB nodes in your cluster, which will act as a gateway. -1. [**Gateway**](#gateway) is a CockroachDB node that processes the SQL request and responds to the client. -1. [**Leaseholder**](#leaseholder-node) is a CockroachDB node responsible for serving reads and coordinating writes of a specific range of keys in your query. -1. [**Raft leader**](#raft-leader) is a CockroachDB node responsible for maintaining consensus among your CockroachDB replicas. - -Once the transaction completes, queries traverse these actors in approximately reverse order. We say "approximately" because there might be many leaseholders and Raft leaders involved in a single query, and there is little-to-no interaction with the load balancer during the response. - -## SQL Client/PostgreSQL Wire Protocol - -To begin, a SQL client (e.g., an app) performs some kind of business logic against your CockroachDB cluster, such as inserting a new customer record. - -This request is sent over a connection to your CockroachDB cluster that's established using a PostgreSQL driver. - -## Load Balancing & Routing - -Modern architectures require distributing your cluster across machines to improve throughput, latency, and uptime. This means queries are routed through load balancers, which choose the best CockroachDB node to connect to. Because all CockroachDB nodes have perfectly symmetrical access to data, this means your load balancer can connect your client to any node in the cluster and access any data while still guaranteeing strong consistency. 
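From the application's point of view, none of this routing is visible: the client connects to the load balancer's address exactly as it would to a single node. Here is a minimal Go sketch using `database/sql` (the host name, user, and database below are placeholders):

~~~ go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // any PostgreSQL-compatible driver works
)

func main() {
	// The host is the load balancer, not a specific CockroachDB node.
	dsn := "postgresql://app_user@lb.example.internal:26257/defaultdb?sslmode=verify-full"
	db, err := sql.Open("postgres", dsn)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	var version string
	if err := db.QueryRow("SELECT version()").Scan(&version); err != nil {
		log.Fatal(err)
	}
	fmt.Println(version)
}
~~~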
- -Your architecture might also have additional layers of routing to enforce regulatory compliance, such as ensuring GDPR compliance. - -Once your router and load balancer determine the best node to connect to, your client's connection is established to the gateway node. - -## Gateway - -The gateway node handles the connection with the client, both receiving and responding to the request. - -### SQL parsing & planning - -The gateway node first [parses](sql-layer.html#sql-parser-planner-executor) the client's SQL statement to ensure it's valid according to the CockroachDB dialect of SQL, and uses that information to [generate a logical SQL plan](sql-layer.html#logical-planning). - -Given that CockroachDB is a distributed database, though, it's also important to take a cluster's topology into account, so the logical plan is then converted into a physical plan—this means sometimes pushing operations onto the physical machines that contain the data. - -### SQL executor - -While CockroachDB presents a SQL interface to clients, the actual database is built on top of a key-value store. To mediate this, the physical plan generated at the end of SQL parsing is passed to the SQL executor, which executes the plan by performing key-value operations through the `TxnCoordSender`. For example, the SQL executor converts `INSERT `statements into `Put()` operations. - -### TxnCoordSender - -The `TxnCoordSender` provides an API to perform key-value operations on your database. - -On its back end, the `TxnCoordSender` performs a large amount of the accounting and tracking for a transaction, including: - -- Accounts for all keys involved in a transaction. This is used, among other ways, to manage the transaction's state. -- Packages all key-value operations into a `BatchRequest`, which are forwarded on to the node's `DistSender`. - -### DistSender - -The gateway node's `DistSender` receives `BatchRequests` from the `TxnCoordSender`. It dismantles the initial `BatchRequest` by taking each operation and finding which physical machine should receive the request for the range—known as the range's leaseholder. The address of the range's current leaseholder is readily available in both local caches, as well as in the [cluster's `meta` ranges](distribution-layer.html#meta-range-kv-structure). - -These dismantled `BatchRequests` are reassembled into new `BatchRequests` containing the address of the range's leaseholder. - -All write operations also propagate the leaseholder's address back to the `TxnCoordSender`, so it can track and clean up write operations as necessary. - -The `DistSender` sends out the first `BatchRequest` for each range in parallel. As soon as it receives a provisional acknowledgment from the leaseholder node’s evaluator (details below), it sends out the next `BatchRequest` for that range. - -The `DistSender` then waits to receive acknowledgments for all of its write operations, as well as values for all of its read operations. However, this wait isn't necessarily blocking, and the `DistSender` may still perform operations with ongoing transactions. - -## Leaseholder node - -The gateway node's `DistSender` tries to send its `BatchRequests` to the replica identified as the range's [leaseholder](replication-layer.html#leases), which is a single replica that serves all reads for a range, as well as coordinates all writes. Leaseholders play a crucial role in CockroachDB's architecture, so it's a good topic to make sure you're familiar with. 
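Conceptually, this routing is a small retry loop around the cached leaseholder address. The sketch below is illustrative Go, not the actual `DistSender` code; the scenarios it distinguishes are spelled out in the next section:

~~~ go
package routing

import "errors"

// Replica errors that mirror the response scenarios described below.
type notLeaseholderErr struct{ lastKnownLeaseholder string }
type rangeNotFoundErr struct{}

func (notLeaseholderErr) Error() string { return "not the leaseholder" }
func (rangeNotFoundErr) Error() string  { return "no replica of that range on this node" }

// routeToLeaseholder tries the cached leaseholder first and repairs the
// cache whenever a replica says it cannot serve the request.
//   - cached:     the last known leaseholder address from the local cache
//   - metaLookup: stand-in for a fresh lookup against the meta2 range
//   - send:       stand-in for the gRPC call that delivers the BatchRequest
func routeToLeaseholder(cached string, metaLookup func() string, send func(target string) error) error {
	target := cached
	for attempt := 0; attempt < 3; attempt++ {
		switch e := send(target).(type) {
		case nil:
			return nil // success: the replica held the lease and served the batch
		case notLeaseholderErr:
			// Wrong replica, but it reported who it thinks holds the lease.
			target = e.lastKnownLeaseholder
		case rangeNotFoundErr:
			// This node has no replica of the range at all: consult meta2 again.
			target = metaLookup()
		default:
			return e // unrelated failure: surface it to the caller
		}
	}
	return errors.New("could not reach a leaseholder")
}
~~~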
- -### Request response - -Because the leaseholder replica can shift between nodes, all nodes must be able to return a request for any key, returning a response indicating one of these scenarios: - -##### No Longer Leaseholder - -If a node is no longer the leaseholder, but still contains a replica of the range, it denies the request but includes the last known address for the leaseholder of that range. - -Upon receipt of this response, the `DistSender` will update the header of the `BatchRequest` with the new address, and then resend the `BatchRequest` to the newly identified leaseholder. - -##### No Longer Has/Never Had Range - -If a node doesn't have a replica for the requested range, it denies the request without providing any further information. - -In this case, the `DistSender` must look up the current leaseholder using the [cluster's `meta` ranges](distribution-layer.html#meta-range-kv-structure). - -##### Success - -Once the node that contains the leaseholder of the range receives the `BatchRequest`, it begins processing it, and progresses onto checking the timestamp cache. - -### Timestamp cache - -The timestamp cache tracks the highest timestamp (i.e., most recent) for any read operation that a given range has served. - -Each write operation in a `BatchRequest` checks its own timestamp versus the timestamp cache to ensure that the write operation has a higher timestamp; this guarantees that history is never rewritten and you can trust that reads always served the most recent data. It's one of the crucial mechanisms CockroachDB uses to ensure serializability. If a write operation fails this check, it must be restarted at a timestamp higher than the timestamp cache's value. - -### Latch manager - -Operations in the `BatchRequest` are serialized through the leaseholder's latch manager. - -This works by giving each write operation a latch on a row. Any reads or writes that come in after the latch has been granted on the row must wait for the write to complete, at which point the latch is released and the subsequent operations can continue. - -### Batch Evaluation - -The batch evaluator ensures that write operations are valid. Our architecture makes this fairly trivial. First, the evaluator can simply check the leaseholder's data to ensure the write is valid; because it has coordinated all writes to the range, it must have the most up-to-date versions of the range's data. Secondly, because of the latch manager, each write operation is guaranteed to uncontested access to the range (i.e., there is no contention with other write operations). - -If the write operation is valid according to the evaluator, the leaseholder sends a provisional acknowledgment to the gateway node's `DistSender`; this lets the `DistSender` begin to send its subsequent `BatchRequests` for this range. - -Importantly, this feature is entirely built for transactional optimization (known as [transaction pipelining](transaction-layer.html#transaction-pipelining)). There are no issues if an operation passes the evaluator but doesn't end up committing. - -### Reads from the storage layer - -All operations (including writes) begin by reading from the local instance of the [storage engine](storage-layer.html) to check for write intents for the operation's key. 
We talk much more about [write intents in the transaction layer of the CockroachDB architecture](transaction-layer.html#write-intents), which is worth reading, but a simplified explanation is that these are provisional, uncommitted writes that express that some other concurrent transaction plans to write a value to the key. - -What we detail below is a simplified version of the CockroachDB transaction model. For more detail, check out [the transaction architecture documentation](transaction-layer.html). - -#### Resolving Write Intents - -If an operation encounters a write intent for a key, it attempts to "resolve" the write intent by checking the state of the write intent's transaction. If the write intent's transaction record is... - -- `COMMITTED`, this operation converts the write intent to a regular key-value pair, and then proceeds as if it had read that value instead of a write intent. -- `ABORTED`, this operation discards the write intent and reads the next-most-recent value from the storage engine. -- `PENDING`, the new transaction attempts to "push" the write intent's transaction by moving that transaction's timestamp forward (i.e., ahead of this transaction's timestamp); however, this only succeeds if the write intent's transaction has become inactive. - - If the push succeeds, the operation continues. - - If this push fails (which is the majority of the time), this transaction goes into the [`TxnWaitQueue`](transaction-layer.html#txnwaitqueue) on this node. The incoming transaction can only continue once the blocking transaction completes (i.e., commits or aborts). -- `MISSING`, the resolver consults the write intent's timestamp. - - If the intent was created within the transaction liveness threshold, the resolver treats the transaction record as exhibiting the `PENDING` behavior, with the addition of tracking the push in the range's timestamp cache, which will inform the transaction that its timestamp was pushed once the transaction record gets created. - - If the write intent is older than the transaction liveness threshold, the resolver exhibits the `ABORTED` behavior. - - Note that transaction records might be missing because we've avoided writing the record until the transaction commits. For more information, see [Transaction Layer: Transaction records](transaction-layer.html#transaction-records). - -Check out our architecture documentation for more information about [CockroachDB's transactional model](transaction-layer.html). - -#### Read Operations - -If the read doesn't encounter a write intent and the key-value operation is meant to serve a read, it can simply use the value it read from the leaseholder's instance of the storage engine. This works because the leaseholder had to be part of the Raft consensus group for any writes to complete, meaning it must have the most up-to-date version of the range's data. - -The leaseholder aggregates all read responses into a `BatchResponse` that will get returned to the gateway node's `DistSender`. - -As we mentioned before, each read operation also updates the timestamp cache. - -### Write Operations - -After guaranteeing that there are no existing write intents for the keys, the `BatchRequest`'s key-value operations are converted to [Raft operations](replication-layer.html#raft) and have their values converted into write intents. - -The leaseholder then proposes these Raft operations to the Raft group leader. The leaseholder and the Raft leader are almost always the same node, but there are situations where the roles might drift to different nodes.
However, when the two roles are not collocated on the same physical machine, CockroachDB will attempt to relocate them onto the same node at the next opportunity. - -## Raft Leader - -CockroachDB leverages Raft as its consensus protocol. If you aren't familiar with it, we recommend checking out the details about [how CockroachDB leverages Raft](replication-layer.html#raft), as well as [learning more about how the protocol works at large](http://thesecretlivesofdata.com/raft/). - -In terms of executing transactions, the Raft leader receives proposed Raft commands from the leaseholder. Each Raft command is a write that is used to represent an atomic state change of the underlying key-value pairs stored in the storage engine. - -### Consensus - -For each command the Raft leader receives, it proposes a vote to the other members of the Raft group. - -Once the command achieves consensus (i.e., a majority of nodes, including the leader itself, acknowledge the Raft command), it is committed to the Raft leader's Raft log and written to the storage engine. At the same time, the Raft leader also sends a command to all other nodes to include the command in their Raft logs. - -Once the entry is committed to the leader's Raft log, the write is considered committed. At this point the value is written, and any subsequent operation that reads this key from the storage engine will encounter this value. - -Note that this write operation creates a write intent; these writes will not be fully committed until the gateway node sets the transaction record's status to `COMMITTED`. - -## On the way back up - -Now that we have followed an operation all the way down from the SQL client to the storage engine, we can pretty quickly cover what happens on the way back up (i.e., when generating a response to the client). - -1. Once the leaseholder applies a write to its Raft log, it sends a commit acknowledgment to the gateway node's `DistSender`, which was waiting for this signal (having already received the provisional acknowledgment from the leaseholder's evaluator). -1. The gateway node's `DistSender` aggregates commit acknowledgments from all of the write operations in the `BatchRequest`, as well as any values from read operations that should be returned to the client. -1. Once all operations have successfully completed (i.e., reads have returned values and write intents have been committed), the `DistSender` tries to record the transaction's success in the transaction record (which provides a durable mechanism for tracking the transaction's state), which can cause a few situations to arise: - - It checks the timestamp cache of the range where the first write occurred to see if its timestamp got pushed forward. If it did, the transaction performs a [read refresh](transaction-layer.html#read-refreshing) to see if any values it needed have been changed. If the read refresh is successful, the transaction can commit at the pushed timestamp. If the read refresh fails, the transaction must be restarted. - - If the transaction is in an `ABORTED` state, the `DistSender` sends a response indicating as much, which ends up back at the SQL interface. - - Upon passing these checks, the transaction record is either written for the first time with the `COMMITTED` state, or, if it was in a `PENDING` state, it is moved to `COMMITTED`. Only at this point is the transaction considered committed. -1.
The `DistSender` propagates any values that should be returned to the client (e.g., reads or the number of affected rows) to the `TxnCoordSender`, which in turn responds to the SQL interface with the values. - The `TxnCoordSender` also begins asynchronous intent cleanup by sending a request to the `DistSender` to convert all write intents it created for the transaction to fully committed values. However, this process is largely an optimization; if any operation encounters a write intent, it checks the write intent's transaction record. If the transaction record is `COMMITTED`, the operation can perform the same cleanup and convert the write intent to a fully committed value. -1. The SQL interface then responds to the client, and is now prepared to continue accepting new connections. diff --git a/src/current/v22.1/architecture/overview.md b/src/current/v22.1/architecture/overview.md deleted file mode 100644 index 01871014e59..00000000000 --- a/src/current/v22.1/architecture/overview.md +++ /dev/null @@ -1,85 +0,0 @@ ---- -title: Architecture Overview -summary: Learn about the inner-workings of the CockroachDB architecture. -toc: true -key: cockroachdb-architecture.html -docs_area: reference.architecture ---- - -CockroachDB was designed to create the source-available database we would want to use: one that is both scalable and consistent. Developers often have questions about how we've achieved this, and this guide sets out to detail the inner workings of the `cockroach` process as a means of explanation. - -However, you definitely do not need to understand the underlying architecture to use CockroachDB. These pages give serious users and database enthusiasts a high-level framework to explain what's happening under the hood. - -{{site.data.alerts.callout_success}} -If these docs interest you, consider taking the free [Intro to Distributed SQL](https://university.cockroachlabs.com/courses/course-v1:crl+intro-to-distributed-sql-and-cockroachdb+self-paced/about) course on Cockroach University. -{{site.data.alerts.end}} - -## Using this guide - -This guide is broken out into pages detailing each layer of CockroachDB. We recommend reading through the layers sequentially, starting with this overview and then proceeding to the [SQL layer](sql-layer.html). - -If you're looking for a high-level understanding of CockroachDB, you can read the **Overview** section of each layer. For more technical detail—for example, if you're interested in [contributing to the project](https://cockroachlabs.atlassian.net/wiki/x/QQFdB)—you should read the **Components** sections as well. - -{{site.data.alerts.callout_info}} -This guide details how CockroachDB is built, but does not explain how to build an application using CockroachDB. For more information about how to develop applications that use CockroachDB, check out our [Developer Guide](../developer-guide-overview.html). -{{site.data.alerts.end}} - -## Goals of CockroachDB - -CockroachDB was designed to meet the following goals: - -- Make life easier for humans. This means being low-touch and highly automated for [operators](../recommended-production-settings.html) and simple to reason about for [developers](../developer-guide-overview.html). -- Offer industry-leading consistency, even on massively scaled deployments. This means enabling distributed transactions, as well as removing the pain of eventual consistency issues and stale reads. -- Create an always-on database that accepts reads and writes on all nodes without generating conflicts.
-- Allow flexible deployment in any environment, without tying you to any platform or vendor. -- Support familiar tools for working with relational data (i.e., SQL). - -With the confluence of these features, we hope that CockroachDB helps you build global, scalable, resilient deployments and applications. - -It's helpful to understand a few terms before reading our architecture documentation. - -{% include {{ page.version.version }}/misc/database-terms.md %} - -{% include {{ page.version.version }}/misc/basic-terms.md %} - -## Overview - -CockroachDB starts running on machines with two commands: - -- [`cockroach start`](../cockroach-start.html) with a `--join` flag for all of the initial nodes in the cluster, so the process knows all of the other machines it can communicate with. -- [`cockroach init`](../cockroach-init.html) to perform a one-time initialization of the cluster. - -Once the CockroachDB cluster is initialized, developers interact with CockroachDB through a [PostgreSQL-compatible](../postgresql-compatibility.html) SQL API. Thanks to the symmetrical behavior of all nodes in a cluster, you can send [SQL requests](sql-layer.html) to any node; this makes CockroachDB easy to integrate with load balancers. - -After receiving SQL remote procedure calls (RPCs), nodes convert them into key-value (KV) operations that work with our [distributed, transactional key-value store](transaction-layer.html). - -As these RPCs start filling your cluster with data, CockroachDB starts [algorithmically distributing your data among the nodes of the cluster](distribution-layer.html), breaking the data up into 512 MiB chunks that we call ranges. Each range is replicated to at least 3 nodes by default to ensure survivability. This ensures that if any nodes go down, you still have copies of the data which can be used for: - -- Continuing to serve reads and writes. -- Consistently replicating the data to other nodes. - -If a node receives a read or write request it cannot directly serve, it finds the node that can handle the request, and communicates with that node. This means you do not need to know where in the cluster a specific portion of your data is stored; CockroachDB tracks it for you, and enables symmetric read/write behavior from each node. - -Any changes made to the data in a range rely on a [consensus algorithm](replication-layer.html) to ensure that the majority of the range's replicas agree to commit the change. This is how CockroachDB achieves the industry-leading isolation guarantees that allow it to provide your application with consistent reads and writes, regardless of which node you communicate with. - -Ultimately, data is written to and read from disk using an efficient [storage engine](storage-layer.html), which is able to keep track of the data's timestamp. This has the benefit of letting us support the SQL standard [`AS OF SYSTEM TIME`](../as-of-system-time.html) clause, letting you find historical data for a period of time. - -While the high-level overview above gives you a notion of what CockroachDB does, looking at how CockroachDB operates at each of these layers will give you much greater understanding of our architecture. - -### Layers - -At the highest level, CockroachDB converts clients' SQL statements into key-value (KV) data, which is distributed among nodes and written to disk. CockroachDB's architecture is manifested as a number of layers, each of which interacts with the layers directly above and below it as relatively opaque services. 
- -The following pages describe the function each layer performs, while mostly ignoring the details of other layers. This description is true to the experience of the layers themselves, which generally treat the other layers as black-box APIs. However, some interactions between layers require an understanding of each layer's function to follow the entire process. - -Layer | Order | Purpose -------|------------|-------- -[SQL](sql-layer.html) | 1 | Translate client SQL queries to KV operations. -[Transactional](transaction-layer.html) | 2 | Allow atomic changes to multiple KV entries. -[Distribution](distribution-layer.html) | 3 | Present replicated KV ranges as a single entity. -[Replication](replication-layer.html) | 4 | Consistently and synchronously replicate KV ranges across many nodes. This layer also enables consistent reads using a consensus algorithm. -[Storage](storage-layer.html) | 5 | Read and write KV data on disk. - -## What's next? - -Start by learning about what CockroachDB does with your SQL statements at the [SQL layer](sql-layer.html). diff --git a/src/current/v22.1/architecture/reads-and-writes-overview.md b/src/current/v22.1/architecture/reads-and-writes-overview.md deleted file mode 100644 index 8ac155b39e9..00000000000 --- a/src/current/v22.1/architecture/reads-and-writes-overview.md +++ /dev/null @@ -1,68 +0,0 @@ ---- -title: Reads and Writes in CockroachDB -summary: Learn how reads and writes are affected by the replicated and distributed nature of data in CockroachDB. -toc: true -docs_area: reference.architecture ---- - -This page explains how reads and writes are affected by the replicated and distributed nature of data in CockroachDB. It starts by summarizing how CockroachDB executes queries and then guides you through a few simple read and write scenarios. - -{{site.data.alerts.callout_info}} -For more detailed information about how transactions work in CockroachDB, see the [Transaction Layer](transaction-layer.html) documentation. -{{site.data.alerts.end}} - -{% include {{ page.version.version }}/misc/basic-terms.md %} - -## Query execution - -When CockroachDB executes a query, the cluster routes the request to the leaseholder for the range containing the relevant data. If the query touches multiple ranges, the request goes to multiple leaseholders. For a read request, only the leaseholder of the relevant range retrieves the data. For a write request, the Raft consensus protocol dictates that a majority of the replicas of the relevant range must agree before the write is committed. - -Let's consider how these mechanics play out in some hypothetical queries. - -## Read scenario - -First, imagine a simple read scenario where: - -- There are 3 nodes in the cluster. -- There are 3 small tables, each fitting in a single range. -- Ranges are replicated 3 times (the default). -- A query is executed against node 2 to read from table 3. - -_[Figure: Perf tuning concepts]_ - -In this case: - -1. Node 2 (the gateway node) receives the request to read from table 3. -2. The leaseholder for table 3 is on node 3, so the request is routed there. -3. Node 3 returns the data to node 2. -4. Node 2 responds to the client. - -If the query is received by the node that has the leaseholder for the relevant range, there are fewer network hops: - -_[Figure: Perf tuning concepts]_ - -## Write scenario - -Now imagine a simple write scenario where a query is executed against node 3 to write to table 1: - -_[Figure: Perf tuning concepts]_ - -In this case: - -1.
Node 3 (the gateway node) receives the request to write to table 1. -2. The leaseholder for table 1 is on node 1, so the request is routed there. -3. The leaseholder is the same replica as the Raft leader (as is typical), so it simultaneously appends the write to its own Raft log and notifies its follower replicas on nodes 2 and 3. -4. As soon as one follower has appended the write to its Raft log (and thus a majority of replicas agree based on identical Raft logs), it notifies the leader and the write is committed to the key-values on the agreeing replicas. In this diagram, the follower on node 2 acknowledged the write, but it could just as well have been the follower on node 3. Also note that the follower not involved in the consensus agreement usually commits the write very soon after the others. -5. Node 1 returns acknowledgement of the commit to node 3. -6. Node 3 responds to the client. - -Just as in the read scenario, if the write request is received by the node that has the leaseholder and Raft leader for the relevant range, there are fewer network hops: - -_[Figure: Perf tuning concepts]_ - -## Network and I/O bottlenecks - -With the above examples in mind, it's always important to consider network latency and disk I/O as potential performance bottlenecks. In summary: - -- For reads, hops between the gateway node and the leaseholder add latency. -- For writes, hops between the gateway node and the leaseholder/Raft leader, and hops between the leaseholder/Raft leader and Raft followers, add latency. In addition, since Raft log entries are persisted to disk before a write is committed, disk I/O is important. diff --git a/src/current/v22.1/architecture/replication-layer.md b/src/current/v22.1/architecture/replication-layer.md deleted file mode 100644 index 5aa232d2e02..00000000000 --- a/src/current/v22.1/architecture/replication-layer.md +++ /dev/null @@ -1,226 +0,0 @@ ---- -title: Replication Layer -summary: The replication layer of CockroachDB's architecture copies data between nodes and ensures consistency between copies. -toc: true -docs_area: reference.architecture ---- - -The replication layer of CockroachDB's architecture copies data between nodes and ensures consistency between these copies by implementing our consensus algorithm. - -{{site.data.alerts.callout_info}} -If you haven't already, we recommend reading the [Architecture Overview](overview.html). -{{site.data.alerts.end}} - -## Overview - -High availability requires that your database can tolerate nodes going offline without interrupting service to your application. This means replicating data between nodes to ensure the data remains accessible. - -Ensuring consistency with nodes offline, though, is a challenge many databases fail to meet. To solve this problem, CockroachDB uses a consensus algorithm to require that a quorum of replicas agrees on any changes to a range before those changes are committed. Because 3 is the smallest number that can achieve quorum (i.e., 2 out of 3), CockroachDB's high availability (known as multi-active availability) requires 3 nodes. - -The number of failures that can be tolerated is equal to *(Replication factor - 1)/2*. For example, with 3x replication, one failure can be tolerated; with 5x replication, two failures, and so on. You can control the replication factor at the cluster, database, and table level using [replication zones](../configure-replication-zones.html).
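The replication factor is just a zone configuration parameter, so it can be changed with a single statement. A brief sketch (the database name `app` is hypothetical):

~~~ sql
-- Raise the replication factor in the default zone (which most data inherits)
-- from 3 to 5, so any two nodes can fail without losing quorum.
ALTER RANGE default CONFIGURE ZONE USING num_replicas = 5;

-- Or override it for just one database.
ALTER DATABASE app CONFIGURE ZONE USING num_replicas = 5;
~~~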
- -When failures happen, though, CockroachDB automatically realizes nodes have stopped responding and works to redistribute your data to continue maximizing survivability. This process also works the other way around: when new nodes join your cluster, data automatically rebalances onto them, ensuring your load is evenly distributed. - -### Interactions with other layers - -In relationship to other layers in CockroachDB, the replication layer: - -- Receives requests from and sends responses to the distribution layer. -- Writes accepted requests to the storage layer. - -## Components - -### Raft - -Raft is a consensus protocol––an algorithm which makes sure that your data is safely stored on multiple machines, and that those machines agree on the current state even if some of them are temporarily disconnected. - -Raft organizes all nodes that contain a [replica](overview.html#architecture-replica) of a [range](overview.html#architecture-range) into a group--unsurprisingly called a Raft group. Each replica in a Raft group is either a "leader" or a "follower". The leader, which is elected by Raft and long-lived, coordinates all writes to the Raft group. It heartbeats followers periodically and keeps their logs replicated. In the absence of heartbeats, followers become candidates after randomized election timeouts and proceed to hold new leader elections. - - A third replica type is introduced, the "non-voting" replica. These replicas do not participate in Raft elections, but are useful for unlocking use cases that require low-latency multi-region reads. For more information, see [Non-voting replicas](#non-voting-replicas). - -Once a node receives a `BatchRequest` for a range it contains, it converts those KV operations into Raft commands. Those commands are proposed to the Raft group leader––which is what makes it ideal for the [leaseholder](#leases) and the Raft leader to be one and the same––and written to the Raft log. - -For a great overview of Raft, we recommend [The Secret Lives of Data](http://thesecretlivesofdata.com/raft/). - -#### Raft logs - -When writes receive a quorum and are committed by the Raft group leader, they're appended to the Raft log. This provides an ordered set of commands that the replicas agreed on and is essentially the source of truth for consistent replication. - -Because this log is treated as serializable, it can be replayed to bring a node from a past state to its current state. This log also lets nodes that temporarily went offline be "caught up" to the current state without needing to receive a copy of the existing data in the form of a snapshot. - -#### Non-voting replicas - -In versions prior to v21.1, CockroachDB only supported _voting_ replicas: that is, [replicas](overview.html#architecture-replica) that participate as voters in the [Raft consensus protocol](#raft). However, the need for all replicas to participate in the consensus algorithm meant that increasing the [replication factor](../configure-replication-zones.html#num_replicas) came at a cost of increased write latency, since the additional replicas needed to participate in Raft [quorum](overview.html#architecture-overview-consensus). - - In order to provide [better support for multi-region clusters](../multiregion-overview.html) (including the features that make [fast multi-region reads](../multiregion-overview.html#global-tables) and [surviving region failures](../multiregion-overview.html#surviving-region-failures) possible), a new type of replica is introduced: the _non-voting_ replica.
- -Non-voting replicas follow the [Raft log](#raft-logs) (and are thus able to serve [follower reads](../follower-reads.html)), but do not participate in quorum. They have almost no impact on write latencies. - -They are also sometimes referred to as [_read-only_ replicas](https://cloud.google.com/spanner/docs/replication#read-only), since they only serve reads, but do not participate in quorum (and thus do not incur the associated latency costs). - -Non-voting replicas can be configured via [zone configurations through `num_voters` and `num_replicas`](../configure-replication-zones.html#num_voters). When `num_voters` is configured to be less than `num_replicas`, the difference dictates the number of non-voting replicas. However, most users should control non-voting replica placement with the high-level [multi-region SQL features](../multiregion-overview.html) instead. - -#### Per-replica circuit breakers - -- [Overview](#per-replica-circuit-breaker-overview) -- [Configuration](#per-replica-circuit-breaker-configuration) -- [Limitations](#per-replica-circuit-breaker-limitations) - - - -##### Overview - -{% include_cached new-in.html version="v22.1" %} When individual [ranges](overview.html#architecture-range) become temporarily unavailable, requests to those ranges are refused by a per-replica "circuit breaker" mechanism instead of hanging indefinitely. - -From a user's perspective, this means that if a [SQL query](sql-layer.html) is going to ultimately fail due to accessing a temporarily unavailable range, a [replica](overview.html#architecture-replica) in that range will trip its circuit breaker (after 60 seconds [by default](#per-replica-circuit-breaker-timeout)) and bubble a `ReplicaUnavailableError` error back up through the system to inform the user why their query did not succeed. These (hopefully transient) errors are also signalled as events in the DB Console's [Replication Dashboard](../ui-replication-dashboard.html) and as "circuit breaker errors" in its [**Problem Ranges** and **Range Status** pages](../ui-debug-pages.html). Meanwhile, CockroachDB continues asynchronously probing the range's availability. If the replica becomes available again, the breaker is reset so that it can go back to serving requests normally. - -This feature is designed to increase the availability of your CockroachDB clusters by making them more robust to transient errors. - -For more information about per-replica circuit breaker events happening on your cluster, see the following pages in the [DB Console](../ui-overview.html): - -- The [**Replication** dashboard](../ui-replication-dashboard.html). -- The [**Advanced Debug** page](../ui-debug-pages.html). From there you can view the **Problem Ranges** page, which lists the range replicas whose circuit breakers were tripped. You can also view the **Range Status** page, which displays the circuit breaker error message for a given range. - - - -##### Configuration - -Per-replica circuit breakers are enabled by default. Most users will not have to configure anything to get the benefits of this feature. - - - -The circuit breaker timeout value is controlled by the `kv.replica_circuit_breaker.slow_replication_threshold` [cluster setting](../cluster-settings.html), which defaults to an [interval](../interval.html) of `1m0s` (1 minute). - - - -##### Limitations - -Per-replica circuit breakers have the following limitations: - -- They cannot prevent requests from hanging when the node's [liveness range](#epoch-based-leases-table-data) is unavailable. 
For more information about troubleshooting a cluster that's having node liveness issues, see [Node liveness issues](../cluster-setup-troubleshooting.html#node-liveness-issues). -- They are not tripped if _all_ replicas of a range [become unavailable](../cluster-setup-troubleshooting.html#db-console-shows-under-replicated-unavailable-ranges), because the circuit breaker mechanism operates per-replica. This means at least one replica needs to be available to receive the request in order for the breaker to trip. - -### Snapshots - -Each replica can be "snapshotted", which copies all of its data as of a specific timestamp (available because of [MVCC](storage-layer.html#mvcc)). This snapshot can be sent to other nodes during a rebalance event to expedite replication. - -After loading the snapshot, the node gets up to date by replaying all actions from the Raft group's log that have occurred since the snapshot was taken. - -### Leases - -A single node in the Raft group acts as the leaseholder, which is the only node that can serve reads or propose writes to the Raft group leader (both actions are received as `BatchRequests` from [`DistSender`](distribution-layer.html#distsender)). - -CockroachDB attempts to elect a leaseholder who is also the Raft group leader, which can also optimize the speed of writes. - -If there is no leaseholder, any node receiving a request will attempt to become the leaseholder for the range. To prevent two nodes from acquiring the lease, the requester includes a copy of the last valid lease it had; if another node became the leaseholder, its request is ignored. - -When serving [strongly-consistent (aka "non-stale") reads](transaction-layer.html#reading), leaseholders bypass Raft; for the leaseholder's writes to have been committed in the first place, they must have already achieved consensus, so a second consensus on the same data is unnecessary. This has the benefit of not incurring latency from networking round trips required by Raft and greatly increases the speed of reads (without sacrificing consistency). - -#### Co-location with Raft leadership - -The range lease is completely separate from Raft leadership, and so without further efforts, Raft leadership and the range lease might not be held by the same replica. However, we can optimize query performance by making the same node both Raft leader and the leaseholder; it reduces network round trips if the leaseholder receiving the requests can simply propose the Raft commands to itself, rather than communicating them to another node. - -To achieve this, each lease renewal or transfer also attempts to collocate them. In practice, that means that the mismatch is rare and self-corrects quickly. - -#### Epoch-based leases (table data) - -To manage leases for table data, CockroachDB implements a notion of "epochs," which are defined as the period between a node joining a cluster and a node disconnecting from a cluster. To extend its leases, each node must periodically update its liveness record, which is stored on a system range key. When a node disconnects, it stops updating the liveness record, and the epoch is considered changed. This causes the node to [lose all of its leases](#how-leases-are-transferred-from-a-dead-node) a few seconds later when the liveness record expires. - -Because leases do not expire until a node disconnects from a cluster, leaseholders do not have to individually renew their own leases. 
Tying lease lifetimes to node liveness in this way lets us eliminate a substantial amount of traffic and Raft processing we would otherwise incur, while still tracking leases for every range. - -#### Expiration-based leases (meta and system ranges) - -A table's meta and system ranges (detailed in the [distribution layer](distribution-layer.html#meta-ranges)) are treated as normal key-value data, and therefore have leases just like table data. - -However, unlike table data, system ranges cannot use epoch-based leases because that would create a circular dependency: system ranges are already being used to implement epoch-based leases for table data. Therefore, system ranges use expiration-based leases instead. Expiration-based leases expire at a particular timestamp (typically after a few seconds). However, as long as a node continues proposing Raft commands, it continues to extend the expiration of its leases. If it doesn't, the next node containing a replica of the range that tries to read from or write to the range will become the leaseholder. - -#### How leases are transferred from a dead node - -When the cluster needs to access a range on a leaseholder node that is dead, that range's lease must be transferred to a healthy node. This process is as follows: - -1. The dead node's liveness record, which is stored in a system range, has an expiration time of 9 seconds, and is heartbeated every 4.5 seconds. When the node dies, the amount of time the cluster has to wait for the record to expire varies, but on average is 6.75 seconds. -1. A healthy node attempts to acquire the lease. This is rejected because lease acquisition can only happen on the Raft leader, which the healthy node is not (yet). Therefore, a Raft election must be held. -1. The rejected attempt at lease acquisition [unquiesces](../ui-replication-dashboard.html#replica-quiescence) ("wakes up") the range associated with the lease. -1. What happens next depends on whether the lease is on [table data](#epoch-based-leases-table-data) or [meta ranges or system ranges](#expiration-based-leases-meta-and-system-ranges): - - If the lease is on [meta or system ranges](#expiration-based-leases-meta-and-system-ranges), the node that unquiesced the range checks if the Raft leader is alive according to the liveness record. If the leader is not alive, it kicks off a campaign to try and win Raft leadership so it can become the leaseholder. - - If the lease is on [table data](#epoch-based-leases-table-data), the "is the leader alive?" check described above is skipped and an election is called immediately. The check is skipped since it would introduce a circular dependency on the liveness record used for table data, which is itself stored in a system range. -1. The Raft election is held and a new leader is chosen from among the healthy nodes. -1. The lease acquisition can now be processed by the newly elected Raft leader. - -This process should take no more than 9 seconds for liveness expiration plus the cost of 2 network roundtrips: 1 for Raft leader election, and 1 for lease acquisition. - -Finally, note that the process described above is lazily initiated: it only occurs when a new request comes in for the range associated with the lease. - -#### Leaseholder rebalancing - -Because CockroachDB serves reads from a range's leaseholder, it benefits your cluster's performance if the replica closest to the primary geographic source of traffic holds the lease. 
However, as traffic to your cluster shifts throughout the course of the day, you might want to dynamically shift which nodes hold leases. - -{{site.data.alerts.callout_info}} - -This feature is also called [Follow-the-Workload](../topology-follow-the-workload.html) in our documentation. - -{{site.data.alerts.end}} - -Periodically (every 10 minutes by default in large clusters, but more frequently in small clusters), each leaseholder considers whether it should transfer the lease to another replica by considering the following inputs: - -- Number of requests from each locality -- Number of leases on each node -- Latency between localities - -##### Intra-locality - -If all the replicas are in the same locality, the decision is made entirely on the basis of the number of leases on each node that contains a replica, trying to achieve a roughly equitable distribution of leases across all of them. This means the distribution isn't perfectly equal; it intentionally tolerates small deviations between nodes to prevent thrashing (i.e., excessive adjustments trying to reach an equilibrium). - -##### Inter-locality - -If replicas are in different localities, CockroachDB attempts to calculate which replica would make the best leaseholder, i.e., provide the lowest latency. - -To enable dynamic leaseholder rebalancing, a range's current leaseholder tracks how many requests it receives from each locality as an exponentially weighted moving average. This calculation results in the locality that has recently requested the range most often being assigned the greatest weight. If another locality then begins requesting the range very frequently, this calculation would shift to assign the second region the greatest weight. - -When checking for leaseholder rebalancing opportunities, the leaseholder correlates each requesting locality's weight (i.e., the proportion of recent requests) to the locality of each replica by checking how similar the localities are. For example, if the leaseholder received requests from gateway nodes in locality `country=us,region=central`, CockroachDB would assign the following weights to replicas in the following localities: - -Replica locality | Replica rebalancing weight ------------------|------------------- -`country=us,region=central` | 100% because it is an exact match -`country=us,region=east` | 50% because only the first locality matches -`country=aus,region=central` | 0% because the first locality does not match - -The leaseholder then evaluates its own weight and latency versus the other replicas to determine an adjustment factor. The greater the disparity between weights and the larger the latency between localities, the more CockroachDB favors the node from the locality with the larger weight. - -When checking for leaseholder rebalancing opportunities, the current leaseholder evaluates each replica's rebalancing weight and adjustment factor for the localities with the greatest weights. If moving the leaseholder is both beneficial and viable, the current leaseholder will transfer the lease to the best replica. - -##### Controlling leaseholder rebalancing - -You can control leaseholder rebalancing through the `kv.allocator.load_based_lease_rebalancing.enabled` [cluster setting](../cluster-settings.html). Note that depending on the needs of your deployment, you can exercise additional control over the location of leases and replicas by [configuring replication zones](../configure-replication-zones.html). 
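For example, if follow-the-workload behavior is not what a deployment needs, the setting named above can be disabled, and lease placement can instead be expressed directly in a zone configuration. A hedged sketch (the table name and locality tier are hypothetical):

~~~ sql
-- Turn off load-based ("follow-the-workload") lease rebalancing cluster-wide.
SET CLUSTER SETTING kv.allocator.load_based_lease_rebalancing.enabled = false;

-- Prefer placing leases for one table on nodes whose --locality includes region=us-east1.
ALTER TABLE users CONFIGURE ZONE USING lease_preferences = '[[+region=us-east1]]';
~~~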
- -### Membership changes: rebalance/repair - -Whenever there are changes to a cluster's number of nodes, the members of Raft groups change and, to ensure optimal survivability and performance, replicas need to be rebalanced. What that looks like varies depending on whether the membership change is nodes being added or going offline. - -- **Nodes added**: The new node communicates information about itself to other nodes, indicating that it has space available. The cluster then rebalances some replicas onto the new node. - -- **Nodes going offline**: If a member of a Raft group ceases to respond, after 5 minutes, the cluster begins to rebalance by replicating the data the downed node held onto other nodes. - -Rebalancing is achieved by using a snapshot of a replica from the leaseholder, and then sending the data to another node over [gRPC](distribution-layer.html#grpc). After the transfer has been completed, the node with the new replica joins that range's Raft group; it then detects that its latest timestamp is behind the most recent entries in the Raft log and it replays all of the actions in the Raft log on itself. - -#### Load-based replica rebalancing - -In addition to the rebalancing that occurs when nodes join or leave a cluster, replicas are also rebalanced automatically based on the relative load across the nodes within a cluster. For more information, see the `kv.allocator.load_based_rebalancing` and `kv.allocator.qps_rebalance_threshold` [cluster settings](../cluster-settings.html). Note that depending on the needs of your deployment, you can exercise additional control over the location of leases and replicas by [configuring replication zones](../configure-replication-zones.html). - -## Interactions with other layers - -### Replication and distribution layers - -The replication layer receives requests from its and other nodes' `DistSender`. If this node is the leaseholder for the range, it accepts the requests; if it isn't, it returns an error with a pointer to which node it believes *is* the leaseholder. These KV requests are then turned into Raft commands. - -The replication layer sends `BatchResponses` back to the distribution layer's `DistSender`. - -### Replication and storage layers - -Committed Raft commands are written to the Raft log and ultimately stored on disk through the storage layer. - -The leaseholder serves reads from the storage layer. - -## What's next? - -Learn how CockroachDB reads and writes data from disk in the [storage layer](storage-layer.html). diff --git a/src/current/v22.1/architecture/sql-layer.md b/src/current/v22.1/architecture/sql-layer.md deleted file mode 100644 index b74d1cb23e3..00000000000 --- a/src/current/v22.1/architecture/sql-layer.md +++ /dev/null @@ -1,159 +0,0 @@ ---- -title: SQL Layer -summary: The SQL layer of CockroachDB's architecture exposes its SQL API to developers and converts SQL statements into key-value operations. -toc: true -docs_area: reference.architecture ---- - -The SQL layer of CockroachDB's architecture exposes a SQL API to developers and converts high-level [SQL statements](../sql-statements.html) into low-level read and write requests to the underlying key-value store, which are passed to the [transaction Layer](transaction-layer.html). - -It consists of the following sublayers: - -- [SQL API](#sql-api), which forms the user interface. -- [Parser](#parsing), which converts SQL text into an abstract syntax tree (AST). 
-- [Cost-based optimizer](#logical-planning), which converts the AST into an optimized logical query plan. -- [Physical planner](#physical-planning), which converts the logical query plan into a physical query plan for execution by one or more nodes in the cluster. -- [SQL execution engine](#query-execution), which executes the physical plan by making read and write requests to the underlying key-value store. - -{{site.data.alerts.callout_info}} -If you haven't already, we recommend reading the [Architecture Overview](overview.html). -{{site.data.alerts.end}} - -## Overview - -Once CockroachDB has been [deployed](../../cockroachcloud/quickstart.html), developers need only a [connection string](../connection-parameters.html) to the cluster, and they can start working with SQL statements. - - - -Because each node in a CockroachDB cluster behaves symmetrically, developers can send requests to any node (which means CockroachDB works well with load balancers). Whichever node receives the request acts as the "gateway node," which processes the request and responds to the client. - -Requests to the cluster arrive as SQL statements, but data is ultimately written to and read from the [storage layer](storage-layer.html) as key-value (KV) pairs. To handle this, the SQL layer converts SQL statements into a plan of KV operations, which it then passes along to the [transaction layer](transaction-layer.html). - -### Interactions with other layers - -In relationship to other layers in CockroachDB, the SQL layer: - -- Receives requests from the outside world via its [SQL API](#sql-api). -- Converts SQL statements into low-level KV operations, which it sends as requests to the [transaction layer](transaction-layer.html). - -## Components - -### Relational structure - -Developers experience data stored in CockroachDB as a relational structure comprised of rows and columns. Sets of rows and columns are further organized into [tables](../show-tables.html). Collections of tables are then organized into [databases](../show-databases.html). A CockroachDB cluster can contain many databases. - -CockroachDB provides typical relational features like [constraints](../constraints.html) (e.g., [foreign keys](../foreign-key.html)). These features mean that application developers can trust that the database will ensure consistent structuring of the application's data; data validation doesn't need to be built into the application logic separately. - -### SQL API - -CockroachDB implements most of the ANSI SQL standard to manifest its relational structure. For a complete list of the SQL features CockroachDB supports, see [SQL Feature Support](../sql-feature-support.html). - -Importantly, through the SQL API developers have access to ACID-semantic [transactions](../transactions.html) like they would through any SQL database (using [`BEGIN`](../begin-transaction.html), [`COMMIT`](../commit-transaction.html), etc.). - -### PostgreSQL wire protocol - -SQL queries reach your cluster through the PostgreSQL wire protocol. This makes connecting your application to the cluster simple by supporting many PostgreSQL-compatible [drivers and ORMs](../install-client-drivers.html). - -### SQL parser, planner, executor - -When a node in a CockroachDB cluster receives a SQL request from a client, it [parses the statement](#parsing) and [creates an optimized logical query plan](#logical-planning) that is further translated into a [physical query plan](#physical-planning). Finally, it [executes the physical plan](#query-execution). 
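If you want to see what the optimizer settles on for a particular statement (described under Logical planning below), you can ask for its plan directly; a small, hedged example in which the `users` table and its `city` column are hypothetical:

~~~ sql
-- Show the optimized logical plan chosen by the cost-based optimizer.
EXPLAIN (OPT) SELECT * FROM users WHERE city = 'seattle';
~~~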
- -#### Parsing - -SQL queries are parsed against our `yacc` file (which describes our supported syntax), and the SQL version of each query is converted into an [abstract syntax tree](https://en.wikipedia.org/wiki/Abstract_syntax_tree) (AST). - -#### Logical planning - -During the *logical planning* phase, the AST is transformed into a query plan in the following steps: - -1. The AST is transformed into a high-level logical query plan. During this transformation, CockroachDB also performs [semantic analysis](https://en.wikipedia.org/wiki/Semantic_analysis_(compilers)), which includes operations like: - - Checking whether the query is a valid statement in the SQL language. - - Resolving names, such as the names of tables or variables to their values. - - Eliminating unneeded intermediate computations, e.g., by replacing `0.6 + 0.4` with `1.0`. This is also known as [constant folding](https://en.wikipedia.org/wiki/Constant_folding). - - Finalizing which data types to use for intermediate results, e.g., when a query contains one or more [subqueries](../subqueries.html). - -2. The logical plan is *simplified* using a series of transformations that are always valid. For example, `a BETWEEN b AND c` may be converted to `a >= b AND a <= c`. - -3. The logical plan is *optimized* using a [search algorithm](../cost-based-optimizer.html#how-is-cost-calculated) that evaluates many possible ways to execute a query and selects an execution plan with the least costs. - -The result of the final step above is an optimized logical plan. To view the logical plan generated by the [cost-based optimizer](../cost-based-optimizer.html), use the [`EXPLAIN (OPT)`](../explain.html) statement. - -#### Physical planning - -The physical planning phase decides which nodes will participate in -the execution of the query, based on range locality information. This -is where CockroachDB decides to distribute a query to perform some -computations close to where the data is stored. - -More concretely, the physical planning phase transforms the optimized logical plan generated during [logical planning](#logical-planning) into a [directed acyclic graph](https://en.wikipedia.org/wiki/Directed_acyclic_graph) (DAG) of physical *SQL operators*. These operators can be viewed by running the [`EXPLAIN(DISTSQL)`](../explain.html) statement. - -Because the [distribution layer](distribution-layer.html) presents the abstraction of a single key space, the SQL layer can perform read and write operations for any range on any node. This allows the SQL operators to behave identically whether planned in gateway or distributed mode. - -The decision about whether to distribute a query across multiple nodes is made by a heuristic that estimates the quantity of data that would need to be sent over the network. Queries that only need a small number of rows are executed on the gateway node. Other queries are distributed across multiple nodes. - -For example, when a query is distributed, the physical planning phase splits the scan operations from the logical plan into multiple physical _TableReader_ operators, one for each node containing a range read by the scan. The remaining logical operations (which may perform filters, joins, and aggregations) are then scheduled on the same nodes as the TableReaders. This results in computations being performed as close to the physical data as possible. - -#### Query execution - -Components of the [physical plan](#physical-planning) are sent to one or more nodes for execution. 
On each node, CockroachDB spawns a *logical processor* to compute a part of the query. Logical processors inside or across nodes communicate with each other over a *logical flow* of data. The combined results of the query are sent back to the first node where the query was received, to be sent further to the SQL client. - -Each processor uses an encoded form for the scalar values manipulated by the query. This is a binary form which is different from that used in SQL. So the values listed in the SQL query must be encoded, and the data communicated between logical processors, and read from disk, must be decoded before it is sent back to the SQL client. - -#### Vectorized query execution - -If [vectorized execution](../vectorized-execution.html) is enabled, the physical plan is sent to nodes to be processed by the vectorized execution engine. - -Upon receiving the physical plan, the vectorized engine reads batches of table data [from disk](storage-layer.html) and converts the data from row format to columnar format. These batches of column data are stored in memory so the engine can access them quickly during execution. - -The vectorized engine uses specialized, precompiled functions that quickly iterate over the type-specific arrays of column data. The columnar output from the functions is stored in memory as the engine processes each column of data. - -After processing all columns of data in the input buffer, the engine converts the columnar output back to row format, and then returns the processed rows to the SQL interface. After a batch of table data has been fully processed, the engine reads the following batch of table data for processing, until the query has been executed. - -### Encoding - -Though SQL queries are written in parsable strings, lower layers of CockroachDB deal primarily in bytes. This means at the SQL layer, in query execution, CockroachDB must convert row data from their SQL representation as strings into bytes, and convert bytes returned from lower layers into SQL data that can be passed back to the client. - -It's also important––for indexed columns––that this byte encoding preserve the same sort order as the data type it represents. This is because of the way CockroachDB ultimately stores data in a sorted key-value map; storing bytes in the same order as the data it represents lets us efficiently scan KV data. - -However, for non-indexed columns (e.g., non-`PRIMARY KEY` columns), CockroachDB instead uses an encoding (known as "value encoding") which consumes less space but does not preserve ordering. - -You can find more exhaustive detail in the [Encoding Tech Note](https://github.com/cockroachdb/cockroach/blob/master/docs/tech-notes/encoding.md). - -### DistSQL - -Because CockroachDB is a distributed database, we've developed a Distributed SQL (DistSQL) optimization tool for some queries, which can dramatically speed up queries that involve many ranges. Though DistSQL's architecture is worthy of its own documentation, this cursory explanation can provide some insight into how it works. - -In non-distributed queries, the coordinating node receives all of the rows that match its query, and then performs any computations on the entire data set. - -However, for DistSQL-compatible queries, each node does computations on the rows it contains, and then sends the results (instead of the entire rows) to the coordinating node. The coordinating node then aggregates the results from each node, and finally returns a single response to the client. 
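You can check whether a statement will be distributed in this way, and how its processors are laid out, by asking for the physical plan. A hedged example, assuming a table named `orders`:

~~~ sql
-- Aggregations are good candidates for distribution: each node counts the rows
-- in the ranges it holds, and only the per-node partial counts are sent back to
-- the gateway. The output includes a link to a diagram of the physical plan.
EXPLAIN (DISTSQL) SELECT count(*) FROM orders;
~~~

Only the small per-node results travel back to the coordinating node, not the rows themselves.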
- -This dramatically reduces the amount of data brought to the coordinating node, and leverages the well-proven concept of parallel computing, ultimately reducing the time it takes for complex queries to complete. In addition, this processes data on the node that already stores it, which lets CockroachDB handle row-sets that are larger than an individual node's storage. - -To run SQL statements in a distributed fashion, we introduce a couple of concepts: - -- **Logical plan**: Similar to the AST/`planNode` tree described above, it represents the abstract (non-distributed) data flow through computation stages. -- **Physical plan**: A physical plan is conceptually a mapping of the logical plan nodes to physical machines running `cockroach`. Logical plan nodes are replicated and specialized depending on the cluster topology. Like `planNodes` above, these components of the physical plan are scheduled and run on the cluster. - -You can find much greater detail in the [DistSQL RFC](https://github.com/cockroachdb/cockroach/blob/master/docs/RFCS/20160421_distributed_sql.md). - -## Schema changes - -CockroachDB performs schema changes, such as the [addition of columns](../add-column.html) or [secondary indexes](../create-index.html), using a protocol that allows tables to remain online (i.e., able to serve reads and writes) during the schema change. This protocol allows different nodes in the cluster to asynchronously transition to a new table schema at different times. - -The schema change protocol decomposes each schema change into a sequence of incremental changes that will achieve the desired effect. - -For example, the addition of a secondary index requires two intermediate schema versions between the start and end versions to ensure that the index is being updated on writes across the entire cluster before it becomes available for reads. To ensure that the database will remain in a consistent state throughout the schema change, we enforce the invariant that there are at most two successive versions of this schema used in the cluster at all times. - -This approach is based on the paper [_Online, Asynchronous Schema Change in F1_](https://research.google/pubs/pub41376/). - -For more information, including examples and limitations, see [Online Schema Changes](../online-schema-changes.html). - -## Technical interactions with other layers - -### SQL and transaction layer - -KV operations from executed `planNodes` are sent to the transaction layer. - -## What's next? - -Learn how CockroachDB handles concurrent requests in the [transaction layer](transaction-layer.html). diff --git a/src/current/v22.1/architecture/storage-layer.md b/src/current/v22.1/architecture/storage-layer.md deleted file mode 100644 index 102139831dd..00000000000 --- a/src/current/v22.1/architecture/storage-layer.md +++ /dev/null @@ -1,176 +0,0 @@ ---- -title: Storage Layer -summary: The storage layer of CockroachDB's architecture reads and writes data to disk. -toc: true -docs_area: reference.architecture ---- - -The storage layer of CockroachDB's architecture reads and writes data to disk. - -{{site.data.alerts.callout_info}} -If you haven't already, we recommend reading the [Architecture Overview](overview.html). -{{site.data.alerts.end}} - - -## Overview - -Each CockroachDB node contains at least one `store`, specified when the node starts, which is where the `cockroach` process reads and writes its data on disk. 
- -This data is stored as key-value pairs on disk using the storage engine, which is treated primarily as a black-box API. - -[CockroachDB uses the Pebble storage engine](../cockroach-start.html#storage-engine). Pebble is inspired by RocksDB, but differs in that it: - -- Is written in Go and implements a subset of RocksDB's large feature set. -- Contains optimizations that benefit CockroachDB. - -Internally, each store contains two instances of the storage engine: - -- One for storing temporary distributed SQL data -- One for all other data on the node - -In addition, there is also a block cache shared amongst all of the stores in a node. These stores in turn have a collection of range replicas. More than one replica for a range will never be placed on the same store or even the same node. - -### Interactions with other layers - -In relationship to other layers in CockroachDB, the storage layer: - -- Serves successful reads and writes from the replication layer. - -## Components - -### Pebble - - CockroachDB uses [Pebble](../cockroach-start.html#storage-engine)––an embedded key-value store inspired by RocksDB, and developed by Cockroach Labs––to read and write data to disk. - -Pebble integrates well with CockroachDB for a number of reasons: - -- It is a key-value store, which makes mapping to our key-value layer simple -- It provides atomic write batches and snapshots, which give us a subset of transactions -- It is developed by Cockroach Labs engineers -- It contains optimizations that are not in RocksDB, that are inspired by how CockroachDB uses the storage engine. For an example of such an optimization, see the blog post [Faster Bulk-Data Loading in CockroachDB](https://www.cockroachlabs.com/blog/bulk-data-import/). - -Efficient storage for the keys is guaranteed by the underlying Pebble engine by means of prefix compression. - -For more information about Pebble, see the [Pebble GitHub page](https://github.com/cockroachdb/pebble) or the blog post [Introducing Pebble: A RocksDB Inspired Key-Value Store Written in Go](https://www.cockroachlabs.com/blog/pebble-rocksdb-kv-store/). - -Pebble uses a Log-structured Merge-tree (_LSM_) to manage data storage. For more information about how LSM-based storage engines like Pebble work, see [log-structured merge-trees](#log-structured-merge-trees) below. - -#### Log-structured Merge-trees - -Pebble uses a Log-structured Merge-tree (hereafter _LSM tree_ or _LSM_) to manage data storage. The LSM is a hierarchical tree. At each level of the tree, there are files on disk that store the data referenced at that level. The files are known as _sorted string table_ files (hereafter _SST_ or _SST file_). - -##### SSTs - -SSTs are an on-disk representation of sorted lists of key-value pairs. Conceptually, they look something like this (intentionally simplified) diagram: - -Structure of an SST file - -SST files are immutable; they are never modified, even during the [compaction process](#compaction). - -##### LSM levels - -The levels of the LSM are organized from L0 to L6. L0 is the top-most level. L6 is the bottom-most level. New data is added into L0 (e.g., using [`INSERT`](../insert.html) or [`IMPORT`](../import.html)) and then merged down into lower levels over time. - -The diagram below shows what an LSM looks like at a high level. Each level is associated with a set of SSTs. Each SST is immutable and has a unique, monotonically increasing number. 
- -The SSTs within each level are guaranteed to be non-overlapping: for example, if one SST contains the keys `[A-F)` (noninclusive), the next will contain keys `[F-R)`, and so on. The L0 level is a special case: it is the only level of the tree that is allowed to contain SSTs with overlapping keys. This exception to the rule is necessary for the following reasons: - -- To allow LSM-based storage engines like Pebble to support ingesting large amounts of data, such as when using the [`IMPORT`](../import.html) statement. -- To allow for easier and more efficient flushes of [memtables](#memtable-and-write-ahead-log). - -LSM tree with SST files - -##### Compaction - -The process of merging SSTs and moving them from L0 down to L6 in the LSM is called _compaction_. The storage engine works to compact data as quickly as possible. As a result of this process, lower levels of the LSM should contain larger SSTs that contain less recently updated keys, while higher levels of the LSM should contain smaller SSTs that contain more recently updated keys. - -The compaction process is necessary in order for an LSM to work efficiently; from L0 down to L6, each level of the tree should have about 1/10 as much data as the next level below. E.g., L1 should have about 1/10 as much data as L2, and so on. Ideally as much of the data as possible will be stored in larger SSTs referenced at lower levels of the LSM. If the compaction process falls behind, it can result in an [inverted LSM](#inverted-lsms). - -SST files are never modified during the compaction process. Instead, new SSTs are written, and old SSTs are deleted. This design takes advantage of the fact that sequential disk access is much, much faster than random disk access. - -The process of compaction works like this: if two SST files _A_ and _B_ need to be merged, their contents (key-value pairs) are read into memory. From there, the contents are sorted and merged together in memory, and a new file _C_ is opened and written to disk with the new, larger sorted list of key-value pairs. This step is conceptually similar to a [merge sort](https://en.wikipedia.org/wiki/Merge_sort). Finally, the old files _A_ and _B_ are deleted. - -##### Inverted LSMs - -If the compaction process falls behind the amount of data being added, and there is more data stored at a higher level of the tree than the level below, the LSM shape can become inverted. - -During normal operation, the LSM should look like this: ◣. An inverted LSM looks like this: ◤. - -An inverted LSM will have degraded read performance. - - - -Read amplification is high when the LSM is inverted. In the inverted LSM state, reads need to start in higher levels and "look down" through a lot of SSTs to read a key's correct (freshest) value. When the storage engine needs to read from multiple SST files in order to service a single logical read, this state is known as _read amplification_. - -Read amplification can be especially bad if a large [`IMPORT`](../import.html) is overloading the cluster (due to insufficient CPU and/or IOPS) and the storage engine has to consult many small SSTs in L0 to determine the most up-to-date value of the keys being read (e.g., using a [`SELECT`](../select-clause.html)). - -A certain amount of read amplification is expected in a normally functioning CockroachDB cluster. For example, a read amplification factor less than 10 as shown in the [**Read Amplification** graph on the **Storage** dashboard](../ui-storage-dashboard.html#other-graphs) is considered healthy. 
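If you prefer SQL to the DB Console, you can take a rough per-store reading of this metric from the `metrics` column of `crdb_internal.kv_store_status`. This is a sketch only: `crdb_internal` is not a stable interface, and the metric key (assumed here to be `rocksdb.read-amplification`) can change between versions.

{% include_cached copy-clipboard.html %}
~~~ sql
> SELECT node_id, store_id, metrics->>'rocksdb.read-amplification' AS read_amplification FROM crdb_internal.kv_store_status;
~~~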
- - - -_Write amplification_ is more complicated than read amplification, but can be defined broadly as: "how many physical files am I rewriting during compactions?" For example, if the storage engine is doing a lot of [compactions](#compaction) in L5, it will be rewriting SST files in L5 over and over again. This is a tradeoff, since if the engine doesn't perform compactions often enough, the size of L0 will get too large, and an inverted LSM will result, which also has ill effects. - -Read amplification and write amplification are key metrics for LSM performance. Neither is inherently "good" or "bad", but they must not occur in excess, and for optimum performance they must be kept in balance. That balance involves tradeoffs. - -Inverted LSMs also have excessive compaction debt. In this state, the storage engine has a large backlog of [compactions](#compaction) to do to return the inverted LSM to a normal, non-inverted state. - -For instructions showing how to monitor your cluster's LSM health, see [LSM Health](../common-issues-to-monitor.html#lsm-health). To monitor your cluster's LSM L0 health, see [LSM L0 Health](../ui-overload-dashboard.html#lsm-l0-health). - -##### Memtable and write-ahead log - -To facilitate managing the LSM tree structure, the storage engine maintains an in-memory representation of the LSM known as the _memtable_; periodically, data from the memtable is flushed to SST files on disk. - -Another file on disk called the write-ahead log (hereafter _WAL_) is associated with each memtable to ensure durability in case of power loss or other failures. The WAL is where the freshest updates issued to the storage engine by the [replication layer](replication-layer.html) are stored on disk. Each WAL has a 1 to 1 correspondence with a memtable; they are kept in sync, and updates from the WAL and memtable are written to SSTs periodically as part of the storage engine's normal operation. - -The relationship between the memtable, the WAL, and the SST files is shown in the diagram below. New values are written to the WAL at the same time as they are written to the memtable. From the memtable they are eventually written to SST files on disk for longer-term storage. - -Relationship between memtable, WAL, and SSTs - -##### LSM design tradeoffs - -The LSM tree design optimizes write performance over read performance. By keeping sorted key-value data in SSTs, it avoids random disk seeks when writing. It tries to mitigate the cost of reads (random seeks) by doing reads from as low in the LSM tree as possible, from fewer, larger files. This is why the storage engine performs compactions. The storage engine also uses a block cache to speed up reads even further whenever possible. - -The tradeoffs in the LSM design are meant to take advantage of the way modern disks work, since even though they provide faster reads of random locations on disk due to caches, they still perform relatively poorly on writes to random locations. - -### MVCC - -CockroachDB relies heavily on [multi-version concurrency control (MVCC)](https://en.wikipedia.org/wiki/Multiversion_concurrency_control) to process concurrent requests and guarantee consistency. Much of this work is done by using [hybrid logical clock (HLC) timestamps](transaction-layer.html#time-and-hybrid-logical-clocks) to differentiate between versions of data, track commit timestamps, and identify a value's garbage collection expiration. All of this MVCC data is then stored in Pebble. 
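One piece of this MVCC metadata is visible from SQL: ordinary tables expose a hidden `crdb_internal_mvcc_timestamp` column that contains the HLC timestamp of the row version being read. The column must be selected explicitly and is intended for debugging rather than application logic; the `users` table below is only a placeholder.

{% include_cached copy-clipboard.html %}
~~~ sql
> SELECT crdb_internal_mvcc_timestamp, * FROM users LIMIT 1;
~~~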
- -Despite being implemented in the storage layer, MVCC values are widely used to enforce consistency in the [transaction layer](transaction-layer.html). For example, CockroachDB maintains a [timestamp cache](transaction-layer.html#timestamp-cache), which stores the timestamp of the last time that the key was read. If a write operation occurs at a lower timestamp than the largest value in the read timestamp cache, it signifies there’s a potential anomaly and the transaction must be restarted at a later timestamp. - -#### Time-travel - -As described in the [SQL:2011 standard](https://en.wikipedia.org/wiki/SQL:2011#Temporal_support), CockroachDB supports time travel queries (enabled by MVCC). - -To do this, all of the schema information also has an MVCC-like model behind it. This lets you perform `SELECT...AS OF SYSTEM TIME`, and CockroachDB uses the schema information as of that time to formulate the queries. - -Using these tools, you can get consistent data from your database as far back as your garbage collection period. - -### Garbage collection - -CockroachDB regularly garbage collects MVCC values to reduce the size of data stored on disk. To do this, we compact old MVCC values when there is a newer MVCC value with a timestamp that's older than the garbage collection period. The garbage collection period can be set at the cluster, database, or table level by configuring the [`gc.ttlseconds` replication zone variable](../configure-replication-zones.html#gc-ttlseconds). For more information about replication zones, see [Configure Replication Zones](../configure-replication-zones.html). - -#### Protected timestamps - -Garbage collection can only run on MVCC values which are not covered by a *protected timestamp*. The protected timestamp subsystem exists to ensure the safety of operations that rely on historical data, such as: - -- [Backups](../backup.html) -- [Changefeeds](../change-data-capture-overview.html) - -Protected timestamps ensure the safety of historical data while also enabling shorter [GC TTLs](../configure-replication-zones.html#gc-ttlseconds). A shorter GC TTL means that fewer previous MVCC values are kept around. This can help lower query execution costs for workloads which update rows frequently throughout the day, since [the SQL layer](sql-layer.html) has to scan over previous MVCC values to find the current value of a row. - -##### How protected timestamps work - -Protected timestamps work by creating *protection records*, which are stored in an internal system table. When a long-running job such as a backup wants to protect data at a certain timestamp from being garbage collected, it creates a protection record associated with that data and timestamp. - -Upon successful creation of a protection record, the MVCC values for the specified data at timestamps less than or equal to the protected timestamp will not be garbage collected. When the job that created the protection record finishes its work, it removes the record, allowing the garbage collector to run on the formerly protected values. - -## Interactions with other layers - -### Storage and replication layers - -The storage layer commits writes from the Raft log to disk, as well as returns requested data (i.e., reads) to the replication layer. - -## What's next? 
- -Now that you've learned about our architecture, [start up a CockroachDB {{ site.data.products.serverless }} cluster](../../cockroachcloud/quickstart.html) or [local cluster](../install-cockroachdb.html) and start [building an app with CockroachDB](../example-apps.html). diff --git a/src/current/v22.1/architecture/transaction-layer.md b/src/current/v22.1/architecture/transaction-layer.md deleted file mode 100644 index 63e8d61d0d0..00000000000 --- a/src/current/v22.1/architecture/transaction-layer.md +++ /dev/null @@ -1,442 +0,0 @@ ---- -title: Transaction Layer -summary: The transaction layer of CockroachDB's architecture implements support for ACID transactions by coordinating concurrent operations. -toc: true -docs_area: reference.architecture ---- - -The transaction layer of CockroachDB's architecture implements support for ACID transactions by coordinating concurrent operations. - -{{site.data.alerts.callout_info}} -If you haven't already, we recommend reading the [Architecture Overview](overview.html). -{{site.data.alerts.end}} - -## Overview - -Above all else, CockroachDB believes consistency is the most important feature of a database––without it, developers cannot build reliable tools, and businesses suffer from potentially subtle and hard to detect anomalies. - -To provide consistency, CockroachDB implements full support for ACID transaction semantics in the transaction layer. However, it's important to realize that *all* statements are handled as transactions, including single statements––this is sometimes referred to as "autocommit mode" because it behaves as if every statement is followed by a `COMMIT`. - -For code samples of using transactions in CockroachDB, see our documentation on [transactions](../transactions.html#sql-statements). - -Because CockroachDB enables transactions that can span your entire cluster (including cross-range and cross-table transactions), it achieves correctness using a distributed, atomic commit protocol called [Parallel Commits](#parallel-commits). - -### Writes and reads (phase 1) - -#### Writing - -When the transaction layer executes write operations, it doesn't directly write values to disk. Instead, it creates several things that help it mediate a distributed transaction: - -- **Locks** for all of a transaction’s writes, which represent a provisional, uncommitted state. CockroachDB has several different types of locking: - - - **Unreplicated Locks** are stored in an in-memory, per-node lock table by the [concurrency control](#concurrency-control) machinery. These locks are not replicated via [Raft](replication-layer.html#raft). - - - **Replicated Locks** (also known as [write intents](#write-intents)) are replicated via [Raft](replication-layer.html#raft), and act as a combination of a provisional value and an exclusive lock. They are essentially the same as standard [multi-version concurrency control (MVCC)](storage-layer.html#mvcc) values but also contain a pointer to the [transaction record](#transaction-records) stored on the cluster. - -- A **transaction record** stored in the range where the first write occurs, which includes the transaction's current state (which is either `PENDING`, `STAGING`, `COMMITTED`, or `ABORTED`). - -As write intents are created, CockroachDB checks for newer committed values. If newer committed values exist, the transaction may be restarted. If existing write intents for the same keys exist, it is resolved as a [transaction conflict](#transaction-conflicts). 
- -If transactions fail for other reasons, such as failing to pass a SQL constraint, the transaction is aborted. - -#### Reading - -If the transaction has not been aborted, the transaction layer begins executing read operations. If a read only encounters standard MVCC values, everything is fine. However, if it encounters any write intents, the operation must be resolved as a [transaction conflict](#transaction-conflicts). - -CockroachDB provides the following types of reads: - -- Strongly-consistent (aka "non-stale") reads: These are the default and most common type of read. These reads go through the [leaseholder](replication-layer.html#leases) and see all writes performed by writers that committed before the reading transaction started. They always return data that is correct and up-to-date. -- Stale reads: These are useful in situations where you can afford to read data that is slightly stale in exchange for faster reads. They can only be used in read-only transactions that use the [`AS OF SYSTEM TIME`](../as-of-system-time.html) clause. They do not need to go through the leaseholder, since they ensure consistency by reading from a local replica at a timestamp that is never higher than the [closed timestamp](#closed-timestamps). For more information about how to use stale reads from SQL, see [Follower Reads](../follower-reads.html). - -### Commits (phase 2) - -CockroachDB checks the running transaction's record to see if it's been `ABORTED`; if it has, it restarts the transaction. - -In the common case, it sets the transaction record's state to `STAGING`, and checks the transaction's pending write intents to see if they have succeeded (i.e., been replicated across the cluster). - -If the transaction passes these checks, CockroachDB responds with the transaction's success to the client, and moves on to the cleanup phase. At this point, the transaction is committed, and the client is free to begin sending more requests to the cluster. - -For a more detailed tutorial of the commit protocol, see [Parallel Commits](#parallel-commits). - -### Cleanup (asynchronous phase 3) - -After the transaction has been committed, it should be marked as such, and all of the write intents should be resolved. To do this, the coordinating node––which kept a track of all of the keys it wrote––reaches out and: - -- Moves the state of the transaction record from `STAGING` to `COMMITTED`. -- Resolves the transaction's write intents to MVCC values by removing the element that points it to the transaction record. -- Deletes the write intents. - -This is simply an optimization, though. If operations in the future encounter write intents, they always check their transaction records––any operation can resolve or remove write intents by checking the transaction record's status. - -### Interactions with other layers - -In relationship to other layers in CockroachDB, the transaction layer: - -- Receives KV operations from the SQL layer. -- Controls the flow of KV operations sent to the distribution layer. - -## Technical details and components - -### Time and hybrid logical clocks - -In distributed systems, ordering and causality are difficult problems to solve. While it's possible to rely entirely on Raft consensus to maintain serializability, it would be inefficient for reading data. 
To optimize performance of reads, CockroachDB implements hybrid-logical clocks (HLC) which are composed of a physical component (always close to local wall time) and a logical component (used to distinguish between events with the same physical component). This means that HLC time is always greater than or equal to the wall time. You can find more detail in the [HLC paper](http://www.cse.buffalo.edu/tech-reports/2014-04.pdf). - -In terms of transactions, the gateway node picks a timestamp for the transaction using HLC time. Whenever a transaction's timestamp is mentioned, it's an HLC value. This timestamp is used to both track versions of values (through [multi-version concurrency control](storage-layer.html#mvcc)), as well as provide our transactional isolation guarantees. - -When nodes send requests to other nodes, they include the timestamp generated by their local HLCs (which includes both physical and logical components). When nodes receive requests, they inform their local HLC of the timestamp supplied with the event by the sender. This is useful in guaranteeing that all data read/written on a node is at a timestamp less than the next HLC time. - -This then lets the node primarily responsible for the range (i.e., the leaseholder) serve reads for data it stores by ensuring the transaction reading the data is at an HLC time greater than the MVCC value it's reading (i.e., the read always happens "after" the write). - -#### Max clock offset enforcement - -CockroachDB requires moderate levels of clock synchronization to preserve data consistency. For this reason, when a node detects that its clock is out of sync with at least half of the other nodes in the cluster by 80% of the [maximum offset allowed](../cockroach-start.html#flags-max-offset), **it crashes immediately**. - -While [serializable consistency](https://en.wikipedia.org/wiki/Serializability) is maintained regardless of clock skew, skew outside the configured clock offset bounds can result in violations of single-key linearizability between causally dependent transactions. It's therefore important to prevent clocks from drifting too far by running [NTP](http://www.ntp.org/) or other clock synchronization software on each node. - -For more detail about the risks that large clock offsets can cause, see [What happens when node clocks are not properly synchronized?](../operational-faqs.html#what-happens-when-node-clocks-are-not-properly-synchronized) - -### Timestamp cache - -As part of providing serializability, whenever an operation reads a value, we store the operation's timestamp in a timestamp cache, which shows the high-water mark for values being read. - -The timestamp cache is a data structure used to store information about the reads performed by [leaseholders](replication-layer.html#leases). This is used to ensure that once some transaction *t1* reads a row, another transaction *t2* that comes along and tries to write to that row will be ordered after *t1*, thus ensuring a serial order of transactions, aka serializability. - -Whenever a write occurs, its timestamp is checked against the timestamp cache. If the timestamp is less than the timestamp cache's latest value, we attempt to push the timestamp for its transaction forward to a later time. Pushing the timestamp might cause the transaction to restart in the second phase of the transaction (see [read refreshing](#read-refreshing)). 
- -### Closed timestamps - -Each CockroachDB range tracks a property called its _closed timestamp_, which means that no new writes can ever be introduced at or below that timestamp. The closed timestamp is advanced continuously on the leaseholder, and lags the current time by some target interval. As the closed timestamp is advanced, notifications are sent to each follower. If a range receives a write at a timestamp less than or equal to its closed timestamp, the write is forced to change its timestamp, which might result in a transaction retry error (see [read refreshing](#read-refreshing)). - -In other words, a closed timestamp is a promise by the range's [leaseholder](replication-layer.html#leases) to its follower replicas that it will not accept writes below that timestamp. Generally speaking, the leaseholder continuously closes timestamps a few seconds in the past. - -The closed timestamps subsystem works by propagating information from leaseholders to followers by piggybacking closed timestamps onto Raft commands such that the replication stream is synchronized with timestamp closing. This means that a follower replica can start serving reads with timestamps at or below the closed timestamp as soon as it has applied all of the Raft commands up to the position in the [Raft log](replication-layer.html#raft-logs) specified by the leaseholder. - -Once the follower replica has applied the abovementioned Raft commands, it has all the data necessary to serve reads with timestamps less than or equal to the closed timestamp. - -Note that closed timestamps are valid even if the leaseholder changes, since they are preserved across [lease transfers](replication-layer.html#epoch-based-leases-table-data). Once a lease transfer occurs, the new leaseholder will not break the closed timestamp promise made by the old leaseholder. - -Closed timestamps provide the guarantees that are used to provide support for low-latency historical (stale) reads, also known as [Follower Reads](../follower-reads.html). Follower reads can be particularly useful in [multi-region deployments](../multiregion-overview.html). - -For more information about the implementation of closed timestamps and Follower Reads, see our blog post [An Epic Read on Follower Reads](https://www.cockroachlabs.com/blog/follower-reads-stale-data/). - -### client.Txn and TxnCoordSender - -As we mentioned in the SQL layer's architectural overview, CockroachDB converts all SQL statements into key-value (KV) operations, which is how data is ultimately stored and accessed. - -All of the KV operations generated from the SQL layer use `client.Txn`, which is the transactional interface for the CockroachDB KV layer––but, as we discussed above, all statements are treated as transactions, so all statements use this interface. - -However, `client.Txn` is actually just a wrapper around `TxnCoordSender`, which plays a crucial role in our code base by: - -- Dealing with transactions' state. After a transaction is started, `TxnCoordSender` starts asynchronously sending heartbeat messages to that transaction's transaction record, which signals that it should be kept alive. If the `TxnCoordSender`'s heartbeating stops, the transaction record is moved to the `ABORTED` status. -- Tracking each written key or key range over the course of the transaction. -- Clearing the accumulated write intent for the transaction when it's committed or aborted. 
All requests being performed as part of a transaction have to go through the same `TxnCoordSender` to account for all of its write intents, which optimizes the cleanup process. - -After setting up this bookkeeping, the request is passed to the `DistSender` in the distribution layer. - -### Transaction records - -To track the status of a transaction's execution, we write a value called a transaction record to our key-value store. All of a transaction's write intents point back to this record, which lets any transaction check the status of any write intents it encounters. This kind of canonical record is crucial for supporting concurrency in a distributed environment. - -Transaction records are always written to the same range as the first key in the transaction, which is known by the `TxnCoordSender`. However, the transaction record itself isn't created until one of the following conditions occur: - -- The write operation commits -- The `TxnCoordSender` heartbeats the transaction -- An operation forces the transaction to abort - -Given this mechanism, the transaction record uses the following states: - -- `PENDING`: Indicates that the write intent's transaction is still in progress. -- `COMMITTED`: Once a transaction has completed, this status indicates that write intents can be treated as committed values. -- `STAGING`: Used to enable the [Parallel Commits](#parallel-commits) feature. Depending on the state of the write intents referenced by this record, the transaction may or may not be in a committed state. -- `ABORTED`: Indicates that the transaction was aborted and its values should be discarded. -- _Record does not exist_: If a transaction encounters a write intent whose transaction record doesn't exist, it uses the write intent's timestamp to determine how to proceed. If the write intent's timestamp is within the transaction liveness threshold, the write intent's transaction is treated as if it is `PENDING`, otherwise it's treated as if the transaction is `ABORTED`. - -The transaction record for a committed transaction remains until all its write intents are converted to MVCC values. - -### Write intents - -Values in CockroachDB are not written directly to the storage layer; instead values are written in a provisional state known as a "write intent." These are essentially MVCC records with an additional value added to them which identifies the transaction record to which the value belongs. They can be thought of as a combination of a replicated lock and a replicated provisional value. - -Whenever an operation encounters a write intent (instead of an MVCC value), it looks up the status of the transaction record to understand how it should treat the write intent value. If the transaction record is missing, the operation checks the write intent's timestamp and evaluates whether or not it is considered expired. - - CockroachDB manages concurrency control using a per-node, in-memory lock table. This table holds a collection of locks acquired by in-progress transactions, and incorporates information about write intents as they are discovered during evaluation. For more information, see the section below on [Concurrency control](#concurrency-control). 
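Recent versions of CockroachDB expose the contents of the lock table for observability through the `crdb_internal.cluster_locks` virtual table. The query below is a sketch only: availability and column names depend on your version, and `crdb_internal` is not a stable interface.

{% include_cached copy-clipboard.html %}
~~~ sql
> SELECT database_name, table_name, lock_key_pretty, txn_id, granted FROM crdb_internal.cluster_locks;
~~~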
- -#### Resolving write intents - -Whenever an operation encounters a write intent for a key, it attempts to "resolve" it, the result of which depends on the write intent's transaction record: - -- `COMMITTED`: The operation reads the write intent and converts it to an MVCC value by removing the write intent's pointer to the transaction record. -- `ABORTED`: The write intent is ignored and deleted. -- `PENDING`: This signals there is a [transaction conflict](#transaction-conflicts), which must be resolved. -- `STAGING`: This signals that the operation should check whether the staging transaction is still in progress by verifying that the transaction coordinator is still heartbeating the staging transaction’s record. If the coordinator is still heartbeating the record, the operation should wait. For more information, see [Parallel Commits](#parallel-commits). -- _Record does not exist_: If the write intent was created within the transaction liveness threshold, it's the same as `PENDING`, otherwise it's treated as `ABORTED`. - -### Concurrency control - - The *concurrency manager* sequences incoming requests and provides isolation between the transactions that issued those requests that intend to perform conflicting operations. This activity is also known as [concurrency control](https://en.wikipedia.org/wiki/Concurrency_control). - -The concurrency manager combines the operations of a *latch manager* and a *lock table* to accomplish this work: - -- The *latch manager* sequences the incoming requests and provides isolation between those requests. -- The *lock table* provides both locking and sequencing of requests (in concert with the latch manager). It is a per-node, in-memory data structure that holds a collection of locks acquired by in-progress transactions. To ensure compatibility with the existing system of [write intents](#write-intents) (a.k.a. replicated, exclusive locks), it pulls in information about these external locks as necessary when they are discovered in the course of evaluating requests. - -The concurrency manager enables support for pessimistic locking via [SQL](sql-layer.html) using the [`SELECT FOR UPDATE`](../select-for-update.html) statement. This statement can be used to increase throughput and decrease tail latency for contended operations. - -For more details about how the concurrency manager works with the latch manager and lock table, see the sections below: - -- [Concurrency manager](#concurrency-manager) -- [Lock table](#lock-table) -- [Latch manager](#latch-manager) - -#### Concurrency manager - - The concurrency manager is a structure that sequences incoming requests and provides isolation between the transactions that issued those requests that intend to perform conflicting operations. During sequencing, conflicts are discovered and any found are resolved through a combination of passive queuing and active pushing. Once a request has been sequenced, it is free to evaluate without concerns of conflicting with other in-flight requests due to the isolation provided by the manager. This isolation is guaranteed for the lifetime of the request but terminates once the request completes. - -Each request in a transaction should be isolated from other requests, both during the request's lifetime and after the request has completed (assuming it acquired locks), but within the surrounding transaction's lifetime. - -The manager accommodates this by allowing transactional requests to acquire locks, which outlive the requests themselves. 
Locks extend the duration of the isolation provided over specific keys to the lifetime of the lock-holder transaction itself. They are (typically) only released when the transaction commits or aborts. Other requests that find these locks while being sequenced wait on them to be released in a queue before proceeding. Because locks are checked during sequencing, locks do not need to be checked again during evaluation. - -However, not all locks are stored directly under the manager's control, so not all locks are discoverable during sequencing. Specifically, write intents (replicated, exclusive locks) are stored inline in the MVCC keyspace, so they are not detectable until request evaluation time. To accommodate this form of lock storage, the manager integrates information about external locks with the concurrency manager structure. - - - -{{site.data.alerts.callout_info}} -The concurrency manager operates on an unreplicated lock table structure. Unreplicated locks are held only on a single replica in a [range](overview.html#architecture-range), which is typically the [leaseholder](replication-layer.html#leases). They are very fast to acquire and release, but provide no guarantee of survivability across [lease transfers or leaseholder crashes](replication-layer.html#how-leases-are-transferred-from-a-dead-node). - -This lack of survivability for unreplicated locks affects SQL statements implemented using them, such as [`SELECT ... FOR UPDATE`](../select-for-update.html#known-limitations). - -In the future, we intend to pull all locks, including those associated with [write intents](#write-intents), into the concurrency manager directly through a replicated lock table structure. -{{site.data.alerts.end}} - -Fairness is ensured between requests. In general, if any two requests conflict then the request that arrived first will be sequenced first. As such, sequencing guarantees first-in, first-out (FIFO) semantics. The primary exception to this is that a request that is part of a transaction which has already acquired a lock does not need to wait on that lock during sequencing, and can therefore ignore any queue that has formed on the lock. For other exceptions to this sequencing guarantee, see the [lock table](#lock-table) section below. - -#### Lock table - - The lock table is a per-node, in-memory data structure that holds a collection of locks acquired by in-progress transactions. Each lock in the table has a possibly-empty lock wait-queue associated with it, where conflicting transactions can queue while waiting for the lock to be released. Items in the locally stored lock wait-queue are propagated as necessary (via RPC) to the existing [`TxnWaitQueue`](#txnwaitqueue), which is stored on the leader of the range's Raft group that contains the [transaction record](#transaction-records). - -The database is read and written using "requests". Transactions are composed of one or more requests. Isolation is needed across requests. Additionally, since transactions represent a group of requests, isolation is needed across such groups. Part of this isolation is accomplished by maintaining multiple versions and part by allowing requests to acquire locks. Even the isolation based on multiple versions requires some form of mutual exclusion to ensure that a read and a conflicting lock acquisition do not happen concurrently. The lock table provides both locking and sequencing of requests (in concert with the use of latches). 
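As noted above, the most direct way for a SQL client to place locks in the lock table ahead of a write is [`SELECT FOR UPDATE`](../select-for-update.html). A minimal sketch (the `kv` table and key are placeholders):

{% include_cached copy-clipboard.html %}
~~~ sql
> BEGIN;
SELECT * FROM kv WHERE key = 'apple' FOR UPDATE;
-- The row is now locked; concurrent writers to 'apple' wait until this transaction commits or aborts.
UPDATE kv SET value = 'green' WHERE key = 'apple';
COMMIT;
~~~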
- -Locks outlive the requests themselves and thereby extend the duration of the isolation provided over specific keys to the lifetime of the lock-holder transaction itself. They are (typically) only released when the transaction commits or aborts. Other requests that find these locks while being sequenced wait on them to be released in a queue before proceeding. Because locks are checked during sequencing, requests are guaranteed access to all declared keys after they have been sequenced. In other words, locks do not need to be checked again during evaluation. - -{{site.data.alerts.callout_info}} -Currently, not all locks are stored directly under lock table control. Some locks are stored as [write intents](#write-intents) in the MVCC layer, and are thus not discoverable during sequencing. Specifically, write intents (replicated, exclusive locks) are stored inline in the MVCC keyspace, so they are often not detectable until request evaluation time. To accommodate this form of lock storage, the lock table adds information about these locks as they are encountered during evaluation. In the future, we intend to pull all locks, including those associated with write intents, into the lock table directly. -{{site.data.alerts.end}} - -The lock table also provides fairness between requests. If two requests conflict then the request that arrived first will typically be sequenced first. There are some exceptions: - -- A request that is part of a transaction which has already acquired a lock does not need to wait on that lock during sequencing, and can therefore ignore any queue that has formed on the lock. - -- Contending requests that encounter different levels of contention may be sequenced in non-FIFO order. This is to allow for greater concurrency. For example, if requests *R1* and *R2* contend on key *K2*, but *R1* is also waiting at key *K1*, *R2* may slip past *R1* and evaluate. - -#### Latch manager - -The latch manager sequences incoming requests and provides isolation between those requests under the supervision of the [concurrency manager](#concurrency-manager). - -The way the latch manager works is as follows: - -As write requests occur for a range, the range's leaseholder serializes them; that is, they are placed into some consistent order. - -To enforce this serialization, the leaseholder creates a "latch" for the keys in the write value, providing uncontested access to the keys. If other requests come into the leaseholder for the same set of keys, they must wait for the latch to be released before they can proceed. - -Read requests also generate latches. Multiple read latches over the same keys can be held concurrently, but a read latch and a write latch over the same keys cannot. - -Another way to think of a latch is like a [mutex](https://en.wikipedia.org/wiki/Lock_(computer_science)) which is only needed for the duration of a single, low-level request. To coordinate longer-running, higher-level requests (i.e., client transactions), we use a durable system of [write intents](#write-intents). - -### Isolation levels - -Isolation is an element of [ACID transactions](https://en.wikipedia.org/wiki/ACID), which determines how concurrency is controlled, and ultimately guarantees consistency. - -CockroachDB executes all transactions at the strongest ANSI transaction isolation level: `SERIALIZABLE`. All other ANSI transaction isolation levels (e.g., `SNAPSHOT`, `READ UNCOMMITTED`, `READ COMMITTED`, and `REPEATABLE READ`) are automatically upgraded to `SERIALIZABLE`. 
Weaker isolation levels have historically been used to maximize transaction throughput. However, [recent research](http://www.bailis.org/papers/acidrain-sigmod2017.pdf) has demonstrated that the use of weak isolation levels results in substantial vulnerability to concurrency-based attacks. - -CockroachDB now only supports `SERIALIZABLE` isolation. In previous versions of CockroachDB, you could set transactions to `SNAPSHOT` isolation, but that feature has been removed. - -`SERIALIZABLE` isolation does not allow any anomalies in your data, and is enforced by requiring the client to retry transactions if serializability violations are possible. - -### Transaction conflicts - -CockroachDB's transactions allow the following types of conflicts that involve running into an intent: - -- **Write/write**, where two `PENDING` transactions create write intents for the same key. -- **Write/read**, when a read encounters an existing write intent with a timestamp less than its own. - -To make this simpler to understand, we'll call the first transaction `TxnA` and the transaction that encounters its write intents `TxnB`. - -CockroachDB proceeds through the following steps: - -1. If the transaction has an explicit priority set (i.e., `HIGH` or `LOW`), the transaction with the lower priority is aborted (in the write/write case) or has its timestamp pushed (in the write/read case). - -1. If the encountered transaction is expired, it's `ABORTED` and conflict resolution succeeds. We consider a write intent expired if: - - It doesn't have a transaction record and its timestamp is outside of the transaction liveness threshold. - - Its transaction record hasn't been heartbeated within the transaction liveness threshold. - -2. `TxnB` enters the `TxnWaitQueue` to wait for `TxnA` to complete. - -Additionally, the following types of conflicts that do not involve running into intents can arise: - -- **Write after read**, when a write with a lower timestamp encounters a later read. This is handled through the [timestamp cache](#timestamp-cache). -- **Read within uncertainty window**, when a read encounters a value with a higher timestamp but it's ambiguous whether the value should be considered to be in the future or in the past of the transaction because of possible *clock skew*. This is handled by attempting to push the transaction's timestamp beyond the uncertain value (see [read refreshing](#read-refreshing)). Note that, if the transaction has to be retried, reads will never encounter uncertainty issues on any node which was previously visited, and that there's never any uncertainty on values read from the transaction's gateway node. - -### TxnWaitQueue - -The `TxnWaitQueue` tracks all transactions that could not push a transaction whose writes they encountered, and must wait for the blocking transaction to complete before they can proceed. - -The `TxnWaitQueue`'s structure is a map of blocking transaction IDs to those they're blocking. For example: - -~~~ -txnA -> txn1, txn2 -txnB -> txn3, txn4, txn5 -~~~ - -Importantly, all of this activity happens on a single node, which is the leader of the range's Raft group that contains the transaction record. - -Once the transaction does resolve––by committing or aborting––a signal is sent to the `TxnWaitQueue`, which lets all transactions that were blocked by the resolved transaction begin executing. - -Blocked transactions also check the status of their own transaction to ensure they're still active. If the blocked transaction was aborted, it's simply removed. 
- -If there is a deadlock between transactions (i.e., they're each blocked by each other's Write Intents), one of the transactions is randomly aborted. In the above example, this would happen if `TxnA` blocked `TxnB` on `key1` and `TxnB` blocked `TxnA` on `key2`. - -### Read refreshing - -Whenever a transaction's timestamp has been pushed, additional checks are required before allowing it to commit at the pushed timestamp: any values which the transaction previously read must be checked to verify that no writes have subsequently occurred between the original transaction timestamp and the pushed transaction timestamp. This check prevents serializability violation. The check is done by keeping track of all the reads using a dedicated `RefreshRequest`. If this succeeds, the transaction is allowed to commit (transactions perform this check at commit time if they've been pushed by a different transaction or by the [timestamp cache](#timestamp-cache), or they perform the check whenever they encounter a [`ReadWithinUncertaintyIntervalError`](../transaction-retry-error-reference.html#readwithinuncertaintyinterval) immediately, before continuing). -If the refreshing is unsuccessful, then the transaction must be retried at the pushed timestamp. - -### Transaction pipelining - -Transactional writes are pipelined when being replicated and when being written to disk, dramatically reducing the latency of transactions that perform multiple writes. For example, consider the following transaction: - -{% include_cached copy-clipboard.html %} -~~~ sql --- CREATE TABLE kv (id UUID PRIMARY KEY DEFAULT gen_random_uuid(), key VARCHAR, value VARCHAR); -> BEGIN; -INSERT into kv (key, value) VALUES ('apple', 'red'); -INSERT into kv (key, value) VALUES ('banana', 'yellow'); -INSERT into kv (key, value) VALUES ('orange', 'orange'); -COMMIT; -~~~ - -With transaction pipelining, write intents are replicated from leaseholders in parallel, so the waiting all happens at the end, at transaction commit time. - -At a high level, transaction pipelining works as follows: - -1. For each statement, the transaction gateway node communicates with the leaseholders (*L*1, *L*2, *L*3, ..., *L*i) for the ranges it wants to write to. Since the primary keys in the table above are UUIDs, the ranges are probably split across multiple leaseholders (this is a good thing, as it decreases [transaction conflicts](#transaction-conflicts)). - -2. Each leaseholder *L*i receives the communication from the transaction gateway node and does the following in parallel: - - Creates write intents and sends them to its follower nodes. - - Responds to the transaction gateway node that the write intents have been sent. Note that replication of the intents is still in-flight at this stage. - -3. When attempting to commit, the transaction gateway node then waits for the write intents to be replicated in parallel to all of the leaseholders' followers. When it receives responses from the leaseholders that the write intents have propagated, it commits the transaction. - -In terms of the SQL snippet shown above, all of the waiting for write intents to propagate and be committed happens once, at the very end of the transaction, rather than for each individual write. This means that the cost of multiple writes is not `O(n)` in the number of SQL DML statements; instead, it's `O(1)`. - -### Parallel Commits - -*Parallel Commits* is an optimized atomic commit protocol that cuts the commit latency of a transaction in half, from two rounds of consensus down to one. 
Combined with [transaction pipelining](#transaction-pipelining), this brings the latency incurred by common OLTP transactions to near the theoretical minimum: the sum of all read latencies plus one round of consensus latency. - -Under this atomic commit protocol, the transaction coordinator can return to the client eagerly when it knows that the writes in the transaction have succeeded. Once this occurs, the transaction coordinator can set the transaction record's state to `COMMITTED` and resolve the transaction's write intents asynchronously. - -The transaction coordinator is able to do this while maintaining correctness guarantees because it populates the transaction record with enough information (via a new `STAGING` state, and an array of in-flight writes) for other transactions to determine whether all writes in the transaction are present, and thus prove whether or not the transaction is committed. - -For an example showing how Parallel Commits works in more detail, see [Parallel Commits - step by step](#parallel-commits-step-by-step). - -{{site.data.alerts.callout_info}} -The latency until intents are resolved is unchanged by the introduction of Parallel Commits: two rounds of consensus are still required to resolve intents. This means that [contended workloads](../performance-best-practices-overview.html#transaction-contention) are expected to profit less from this feature. -{{site.data.alerts.end}} - -#### Parallel Commits - step by step - -This section contains a step-by-step example of a transaction that writes its data using the Parallel Commits atomic commit protocol and does not encounter any errors or conflicts. - -##### Step 1 - -The client starts the transaction. A transaction coordinator is created to manage the state of that transaction. - -![parallel-commits-00.png](../../images/{{page.version.version}}/parallel-commits-00.png "Parallel Commits Diagram #1") - -##### Step 2 - -The client issues a write to the "Apple" key. The transaction coordinator begins the process of laying down a write intent on the key where the data will be written. The write intent has a timestamp and a pointer to an as-yet nonexistent transaction record. Additionally, each write intent in the transaction is assigned a unique sequence number which is used to uniquely identify it. - -The coordinator avoids creating the record for as long as possible in the transaction's lifecycle as an optimization. The fact that the transaction record does not yet exist is denoted in the diagram by its dotted lines. - -{{site.data.alerts.callout_info}} -The coordinator does not need to wait for write intents to replicate from leaseholders before moving on to the next statement from the client, since that is handled in parallel by [Transaction Pipelining](#transaction-pipelining). -{{site.data.alerts.end}} - -![parallel-commits-01.png](../../images/{{page.version.version}}/parallel-commits-01.png "Parallel Commits Diagram #2") - -##### Step 3 - -The client issues a write to the "Berry" key. The transaction coordinator lays down a write intent on the key where the data will be written. This write intent has a pointer to the same transaction record as the intent created in [Step 2](#step-2), since these write intents are part of the same transaction. - -As before, the coordinator does not need to wait for write intents to replicate from leaseholders before moving on to the next statement from the client. 
- -![parallel-commits-02.png](../../images/{{page.version.version}}/parallel-commits-02.png "Parallel Commits Diagram #3") - -##### Step 4 - -The client issues a request to commit the transaction's writes. The transaction coordinator creates the transaction record and immediately sets the record's state to `STAGING`, and records the keys of each write that the transaction has in flight. - -It does this without waiting to see whether the writes from Steps [2](#step-2) and [3](#step-3) have succeeded. - -![parallel-commits-03.png](../../images/{{page.version.version}}/parallel-commits-03.png "Parallel Commits Diagram #4") - -##### Step 5 - -The transaction coordinator, having received the client's `COMMIT` request, waits for the pending writes to succeed (i.e., be replicated across the cluster). Once all of the pending writes have succeeded, the coordinator returns a message to the client, letting it know that its transaction has committed successfully. - -![parallel-commits-04.png](../../images/{{page.version.version}}/parallel-commits-04.png "Parallel Commits Diagram #4") - -The transaction is now considered atomically committed, even though the state of its transaction record is still `STAGING`. The reason this is still considered an atomic commit condition is that a transaction is considered committed if it is one of the following logically equivalent states: - -1. The transaction record's state is `STAGING`, and its list of pending writes have all succeeded (i.e., the `InFlightWrites` have achieved consensus across the cluster). Any observer of this transaction can verify that its writes have replicated. Transactions in this state are *implicitly committed*. - -2. The transaction record's state is `COMMITTED`. Transactions in this state are *explicitly committed*. - -Despite their logical equivalence, the transaction coordinator now works as quickly as possible to move the transaction record from the `STAGING` to the `COMMITTED` state so that other transactions do not encounter a possibly conflicting transaction in the `STAGING` state and then have to do the work of verifying that the staging transaction's list of pending writes has succeeded. Doing that verification (also known as the "transaction status recovery process") would be slow. - -Additionally, when other transactions encounter a transaction in `STAGING` state, they check whether the staging transaction is still in progress by verifying that the transaction coordinator is still heartbeating that staging transaction’s record. If the coordinator is still heartbeating the record, the other transactions will wait, on the theory that letting the coordinator update the transaction record with the result of the attempt to commit will be faster than going through the transaction status recovery process. This means that in practice, the transaction status recovery process is only used if the transaction coordinator dies due to an untimely crash. - -## Non-blocking transactions - - CockroachDB supports low-latency, global reads of read-mostly data in [multi-region clusters](../multiregion-overview.html) using _non-blocking transactions_: an extension of the [standard read-write transaction protocol](#overview) that allows a writing transaction to perform [locking](#concurrency-control) in a manner such that contending reads by other transactions can avoid waiting on its locks. 
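For reference, opting a table into this behavior takes a single statement in a multi-region database. The following is a sketch; it assumes a database named `movr` whose [database regions](../multiregion-overview.html) have already been configured.

{% include_cached copy-clipboard.html %}
~~~ sql
> ALTER TABLE movr.promo_codes SET LOCALITY GLOBAL;
~~~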
- -The non-blocking transaction protocol and replication scheme differ from standard read-write transactions as follows: - -- Non-blocking transactions use a replication scheme over the [ranges](overview.html#architecture-range) they operate on that allows all followers in these ranges to serve consistent (non-stale) reads. -- Non-blocking transactions are minimally disruptive to reads over the data they modify, even in the presence of read/write [contention](../performance-best-practices-overview.html#transaction-contention). - -These properties of non-blocking transactions combine to provide predictable read latency for a configurable subset of data in [global deployments](../multiregion-overview.html). This is useful since there exists a sizable class of data which is heavily skewed towards read traffic. - -Most users will not interact with the non-blocking transaction mechanism directly. Instead, they will [set a `GLOBAL` table locality](../multiregion-overview.html#global-tables) using the SQL API. - -### How non-blocking transactions work - -The consistency guarantees offered by non-blocking transactions are enforced through semi-synchronized clocks with bounded uncertainty, _not_ inter-node communication, since the latter would struggle to provide the same guarantees without incurring excessive latency costs in global deployments. - -Non-blocking transactions are implemented via _non-blocking ranges_. Every non-blocking range has the following properties: - -- Any transaction that writes to this range has its write timestamp pushed into the future. -- The range is able to propagate a [closed timestamp](#closed-timestamps) in the future of present time. -- A transaction that writes to this range and commits with a future time commit timestamp needs to wait until the HLC advances past its commit timestamp. This process is known as _"commit-wait"_. Essentially, the HLC waits until it advances past the future timestamp on its own, or it advances due to updates from other timestamps. -- A transaction that reads a future-time write to this range can have its commit timestamp bumped into the future as well, if the write falls in the read's uncertainty window (this is dictated by the [maximum clock offset](#max-clock-offset-enforcement) configured for the cluster). Such transactions (a.k.a. "conflicting readers") may also need to commit-wait. - -As a result of the above properties, all replicas in a non-blocking range are expected to be able to serve transactionally-consistent reads at the present. This means that all follower replicas in a non-blocking range (which includes all [non-voting replicas](replication-layer.html#non-voting-replicas)) implicitly behave as "consistent read replicas", which are exactly what they sound like: read-only replicas that always have a consistent view of the range's current state. - -## Technical interactions with other layers - -### Transaction and SQL layer - -The transaction layer receives KV operations from `planNodes` executed in the SQL layer. - -### Transaction and distribution layer - -The `TxnCoordSender` sends its KV requests to `DistSender` in the distribution layer. - -## What's next? - -Learn how CockroachDB presents a unified view of your cluster's data in the [distribution layer](distribution-layer.html). 
- - - -[storage]: storage-layer.html -[sql]: sql-layer.html diff --git a/src/current/v22.1/array.md b/src/current/v22.1/array.md deleted file mode 100644 index f1b60eacaf9..00000000000 --- a/src/current/v22.1/array.md +++ /dev/null @@ -1,377 +0,0 @@ ---- -title: ARRAY -summary: The ARRAY data type stores one-dimensional, 1-indexed, homogeneous arrays of any non-array data types. -toc: true -keywords: gin, gin index, gin indexes, inverted index, inverted indexes, accelerated index, accelerated indexes -docs_area: reference.sql ---- - -The `ARRAY` data type stores one-dimensional, 1-indexed, homogeneous arrays of any non-array [data type](data-types.html). - -The `ARRAY` data type is useful for ensuring compatibility with ORMs and other tools. However, if such compatibility is not a concern, it's more flexible to design your schema with normalized tables. - - CockroachDB supports indexing array columns with [GIN indexes](inverted-indexes.html). This permits accelerating containment queries ([`@>`](functions-and-operators.html#supported-operations) and [`<@`](functions-and-operators.html#supported-operations)) on array columns by adding an index to them. - -{{site.data.alerts.callout_info}} -CockroachDB does not support nested arrays. -{{site.data.alerts.end}} - -## Syntax - -A value of data type `ARRAY` can be expressed in the following ways: - -- Appending square brackets (`[]`) to any non-array [data type](data-types.html). -- Adding the term `ARRAY` to any non-array [data type](data-types.html). - -## Size - -The size of an `ARRAY` value is variable, but it's recommended to keep values under 1 MB to ensure performance. Above that threshold, [write amplification](architecture/storage-layer.html#write-amplification) and other considerations may cause significant performance degradation. - -## Functions - -For the list of supported `ARRAY` functions, see [Functions and Operators](functions-and-operators.html#array-functions). - -## Examples - -### Creating an array column by appending square brackets - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE a (b STRING[]); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO a VALUES (ARRAY['sky', 'road', 'car']); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM a; -~~~ - -~~~ - b ------------------- - {sky,road,car} -(1 row) -~~~ - -### Creating an array column by adding the term `ARRAY` - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE c (d INT ARRAY); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO c VALUES (ARRAY[10,20,30]); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM c; -~~~ - -~~~ - d --------------- - {10,20,30} -(1 row) -~~~ - -### Accessing an array element using array index - -{{site.data.alerts.callout_info}} -Arrays in CockroachDB are 1-indexed. -{{site.data.alerts.end}} - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM c; -~~~ - -~~~ - d --------------- - {10,20,30} -(1 row) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT d[2] FROM c; -~~~ - -~~~ - d ------- - 20 -(1 row) -~~~ - -### Accessing an array column using containment queries - -You can use the [operators](functions-and-operators.html#supported-operations) `<@` ("is contained by") and `@>` ("contains") to run containment queries on `ARRAY` columns. 
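If these containment queries need to be fast on a large table, you can first index the array column with a [GIN index](inverted-indexes.html). The queries below work the same with or without the index:

{% include_cached copy-clipboard.html %}
~~~ sql
> CREATE INVERTED INDEX ON c (d);
~~~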
- -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM c WHERE d <@ ARRAY[10,20,30,40,50]; -~~~ - -~~~ - d --------------- - {10,20,30} -(1 row) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM c WHERE d @> ARRAY[10,20]; -~~~ - -~~~ - d --------------- - {10,20,30} -(1 row) -~~~ - -### Using the overlaps operator - -You can use the `&&` (overlaps) [operator](functions-and-operators.html#supported-operations) to select array columns by checking if another array overlaps the column array. Arrays overlap if they have any elements in common. - -1. Create the table: - - {% include_cached copy-clipboard.html %} - ~~~ sql - CREATE TABLE a (b STRING[]); - ~~~ - -1. Insert two new arrays: - - {% include_cached copy-clipboard.html %} - ~~~ sql - INSERT INTO a VALUES (ARRAY['runway', 'houses', 'city', 'clouds']); - INSERT INTO a VALUES (ARRAY['runway', 'houses', 'city']); - INSERT INTO a VALUES (ARRAY['sun','moon']); - ~~~ - -1. Use the `&&` operator in a `WHERE` clause to a query: - - {% include_cached copy-clipboard.html %} - ~~~ sql - SELECT * FROM a WHERE b && ARRAY['clouds','moon']; - ~~~ - - ~~~ - b - ------------------------------- - {runway,houses,city,clouds} - {sun,moon} - (2 rows) - - - Time: 30ms total (execution 2ms / network 28ms) - ~~~ - -### Appending an element to an array - -#### Using the `array_append` function - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM c; -~~~ - -~~~ - d --------------- - {10,20,30} -(1 row) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> UPDATE c SET d = array_append(d, 40) WHERE d[3] = 30; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM c; -~~~ - -~~~ - d ------------------ - {10,20,30,40} -(1 row) -~~~ - -#### Using the append (`||`) operator - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM c; -~~~ - -~~~ - d ------------------ - {10,20,30,40} -(1 row) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> UPDATE c SET d = d || 50 WHERE d[4] = 40; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM c; -~~~ - -~~~ - d --------------------- - {10,20,30,40,50} -(1 row) -~~~ - -### Ordering by an array - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE t (a INT ARRAY, b STRING); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO t VALUES (ARRAY[3,4],'threefour'),(ARRAY[1,2],'onetwo'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM t; -~~~ - -~~~ - a | b ---------+------------ - {3,4} | threefour - {1,2} | onetwo -(2 rows) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM t ORDER BY a; -~~~ - -~~~ - a | b ---------+------------ - {1,2} | onetwo - {3,4} | threefour -(2 rows) -~~~ - -## Supported casting and conversion - -[Casting](data-types.html#data-type-conversions-and-casts) between `ARRAY` values is supported when the data types of the arrays support casting. 
For example, it is possible to cast from a `BOOL` array to an `INT` array but not from a `BOOL` array to a `TIMESTAMP` array: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT ARRAY[true,false,true]::INT[]; -~~~ - -~~~ - array ------------ - {1,0,1} -(1 row) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT ARRAY[true,false,true]::TIMESTAMP[]; -~~~ - -~~~ -pq: invalid cast: bool[] -> TIMESTAMP[] -~~~ - -You can cast an array to a `STRING` value, for compatibility with PostgreSQL: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT ARRAY[1,NULL,3]::string; -~~~ - -~~~ - array --------------- - {1,NULL,3} -(1 row) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT ARRAY[(1,'a b'),(2,'c"d')]::string; -~~~ - -~~~ - array ------------------------------------- - {"(1,\"a b\")","(2,\"c\"\"d\")"} -(1 row) -~~~ - -### Implicit casting - -CockroachDB supports implicit casting from string literals to arrays of all data types except the following: - -- [`BYTES`](bytes.html) -- [`ENUM`](enum.html) -- [`JSONB`](jsonb.html) -- [`SERIAL`](serial.html) -- `Box2D` [(spatial type)](spatial-glossary.html#data-types) -- `GEOGRAPHY` [(spatial type)](spatial-glossary.html#data-types) -- `GEOMETRY` [(spatial type)](spatial-glossary.html#data-types) - -For example, if you create a table with a column of type `INT[]`: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE x (a UUID DEFAULT gen_random_uuid() PRIMARY KEY, b INT[]); -~~~ - -And then insert a string containing a comma-delimited set of integers contained in brackets: - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO x(b) VALUES ('{1,2,3}'), (ARRAY[4,5,6]); -~~~ - -CockroachDB implicitly casts the string literal as an `INT[]`: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM x; -~~~ - -~~~ - a | b ----------------------------------------+---------- - 2ec0ed91-8a82-4f2e-888e-ae86ece4fc60 | {4,5,6} - a521d6e9-3a2a-490d-968c-1365cace038a | {1,2,3} -(2 rows) -~~~ - -## See also - -- [Data Types](data-types.html) -- [GIN Indexes](inverted-indexes.html) diff --git a/src/current/v22.1/as-of-system-time.md b/src/current/v22.1/as-of-system-time.md deleted file mode 100644 index a61297e4050..00000000000 --- a/src/current/v22.1/as-of-system-time.md +++ /dev/null @@ -1,277 +0,0 @@ ---- -title: AS OF SYSTEM TIME -summary: The AS OF SYSTEM TIME clause executes a statement as of a specified time. -toc: true -docs_area: reference.sql ---- - -The `AS OF SYSTEM TIME timestamp` clause causes statements to execute using the database contents "as of" a specified time in the past. - -You can use this clause to read historical data (also known as "[time travel queries](https://www.cockroachlabs.com/blog/time-travel-queries-select-witty_subtitle-the_future/)") and to improve performance by decreasing transaction conflicts. See [Use `AS OF SYSTEM TIME` to decrease conflicts with long-running queries](performance-best-practices-overview.html#use-as-of-system-time-to-decrease-conflicts-with-long-running-queries). - -{{site.data.alerts.callout_info}} -Historical data is available only within the garbage collection window, which is determined by the `ttlseconds` field in the [replication zone configuration](configure-replication-zones.html). -{{site.data.alerts.end}} - -## Synopsis - -The `AS OF SYSTEM TIME` clause is supported in multiple SQL contexts, -including but not limited to: - -- In [`SELECT` clauses](select-clause.html), at the very end of the `FROM` sub-clause. 
-- In [`BACKUP`](backup.html), after the parameters of the `TO` sub-clause. -- In [`RESTORE`](restore.html), after the parameters of the `FROM` sub-clause. -- In [`BEGIN`](begin-transaction.html), after the `BEGIN` keyword. -- In [`SET`](set-transaction.html), after the `SET TRANSACTION` keyword. - -## Parameters - -The `timestamp` argument supports the following formats: - -Format | Notes ----|--- -[`INT`](int.html) | Nanoseconds since the Unix epoch. -negative [`INTERVAL`](interval.html) | Added to `statement_timestamp()`, and thus must be negative. -[`STRING`](string.html) | A [`TIMESTAMP`](timestamp.html), [`INT`](int.html) of nanoseconds, or negative [`INTERVAL`](interval.html). -`follower_read_timestamp()`| A [function](functions-and-operators.html) that returns the [`TIMESTAMP`](timestamp.html) `statement_timestamp() - 4.8s`. Using this function will set the time as close as possible to the present time while remaining safe for [exact staleness follower reads](follower-reads.html#exact-staleness-reads). -`with_min_timestamp(TIMESTAMPTZ, [nearest_only])` | The minimum [timestamp](timestamp.html) at which to perform the [bounded staleness read](follower-reads.html#bounded-staleness-reads). The actual timestamp of the read may be equal to or later than the provided timestamp, but cannot be before the provided timestamp. This is useful to request a read from nearby followers, if possible, while enforcing causality between an operation at some point in time and any dependent reads. This function accepts an optional `nearest_only` argument that will error if the reads cannot be serviced from a nearby replica. -`with_max_staleness(INTERVAL, [nearest_only])` | The maximum staleness interval with which to perform the [bounded staleness read](follower-reads.html#bounded-staleness-reads). The timestamp of the read can be at most this stale with respect to the current time. This is useful to request a read from nearby followers, if possible, while placing some limit on how stale results can be. Note that `with_max_staleness(INTERVAL)` is equivalent to `with_min_timestamp(now() - INTERVAL)`. This function accepts an optional `nearest_only` argument that will error if the reads cannot be serviced from a nearby replica. - -{{site.data.alerts.callout_success}} -To set `AS OF SYSTEM TIME follower_read_timestamp()` on all implicit and explicit read-only transactions by default, set the `default_transaction_use_follower_reads` [session variable](set-vars.html) to `on`. When `default_transaction_use_follower_reads=on` and follower reads are enabled, all read-only transactions use follower reads. 
-{{site.data.alerts.end}} - -## Examples - -### Select historical data (time-travel) - -Imagine this example represents the database's current data: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT name, balance - FROM accounts - WHERE name = 'Edna Barath'; -~~~ -~~~ -+-------------+---------+ -| name | balance | -+-------------+---------+ -| Edna Barath | 750 | -| Edna Barath | 2200 | -+-------------+---------+ -~~~ - -We could instead retrieve the values as they were on October 3, 2016 at 12:45 UTC: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT name, balance - FROM accounts - AS OF SYSTEM TIME '2016-10-03 12:45:00' - WHERE name = 'Edna Barath'; -~~~ -~~~ -+-------------+---------+ -| name | balance | -+-------------+---------+ -| Edna Barath | 450 | -| Edna Barath | 2000 | -+-------------+---------+ -~~~ - - -### Using different timestamp formats - -Assuming the following statements are run at `2016-01-01 12:00:00`, they would execute as of `2016-01-01 08:00:00`: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM t AS OF SYSTEM TIME '2016-01-01 08:00:00' -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM t AS OF SYSTEM TIME 1451635200000000000 -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM t AS OF SYSTEM TIME '1451635200000000000' -~~~ - -{% include_cached copy-clipboard.html %} -~~~sql -> SELECT * FROM t AS OF SYSTEM TIME '-4h' -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM t AS OF SYSTEM TIME INTERVAL '-4h' -~~~ - -### Selecting from multiple tables - -{{site.data.alerts.callout_info}} -It is not yet possible to select from multiple tables at different timestamps. The entire query runs at the specified time in the past. -{{site.data.alerts.end}} - -When selecting over multiple tables in a single `FROM` clause, the `AS -OF SYSTEM TIME` clause must appear at the very end and applies to the -entire `SELECT` clause. - -For example: - -{% include_cached copy-clipboard.html %} -~~~sql -> SELECT * FROM t, u, v AS OF SYSTEM TIME '-4h'; -~~~ - -{% include_cached copy-clipboard.html %} -~~~sql -> SELECT * FROM t JOIN u ON t.x = u.y AS OF SYSTEM TIME '-4h'; -~~~ - -{% include_cached copy-clipboard.html %} -~~~sql -> SELECT * FROM (SELECT * FROM t), (SELECT * FROM u) AS OF SYSTEM TIME '-4h'; -~~~ - -### Using `AS OF SYSTEM TIME` in subqueries - -To enable time travel, the `AS OF SYSTEM TIME` clause must appear in -at least the top-level statement. It is not valid to use it only in a -[subquery](subqueries.html). - -For example, the following is invalid: - -~~~ -SELECT * FROM (SELECT * FROM t AS OF SYSTEM TIME '-4h'), u -~~~ - -To facilitate the composition of larger queries from simpler queries, -CockroachDB allows `AS OF SYSTEM TIME` in sub-queries under the -following conditions: - -- The top level query also specifies `AS OF SYSTEM TIME`. -- All the `AS OF SYSTEM TIME` clauses specify the same timestamp. - -For example: - -{% include_cached copy-clipboard.html %} -~~~sql -> SELECT * FROM (SELECT * FROM t AS OF SYSTEM TIME '-4h') tp - JOIN u ON tp.x = u.y - AS OF SYSTEM TIME '-4h' -- same timestamp as above - OK. - WHERE x < 123; -~~~ - -### Use `AS OF SYSTEM TIME` in transactions - -You can use the [`BEGIN`](begin-transaction.html) statement to execute the transaction using the database contents "as of" a specified time in the past. 
- -{% include {{ page.version.version }}/sql/begin-transaction-as-of-system-time-example.md %} - -Alternatively, you can use the [`SET`](set-transaction.html) statement to execute the transaction using the database contents "as of" a specified time in the past. - -{% include {{ page.version.version }}/sql/set-transaction-as-of-system-time-example.md %} - -### Use `AS OF SYSTEM TIME` to recover recently lost data - -It is possible to recover lost data as a result of an online schema change prior to when [garbage collection](architecture/storage-layer.html#garbage-collection) begins: - -{% include_cached copy-clipboard.html %} -~~~sql -> CREATE DATABASE foo; -~~~ -~~~ -CREATE DATABASE - - -Time: 3ms total (execution 3ms / network 0ms) -~~~ -{% include_cached copy-clipboard.html %} -~~~sql -> CREATE TABLE foo.bar (id INT PRIMARY KEY); -~~~ -~~~ -CREATE TABLE - - -Time: 4ms total (execution 3ms / network 0ms) -~~~ -{% include_cached copy-clipboard.html %} -~~~sql -> INSERT INTO foo.bar VALUES (1), (2); -~~~ -~~~ -INSERT 2 - - -Time: 5ms total (execution 5ms / network 0ms) -~~~ -{% include_cached copy-clipboard.html %} -~~~sql -> SELECT now(); -~~~ -~~~ - now --------------------------------- - 2022-02-01 21:11:53.63771+00 -(1 row) - - -Time: 1ms total (execution 0ms / network 0ms) -~~~ -{% include_cached copy-clipboard.html %} -~~~sql -> DROP TABLE foo.bar; -~~~ -~~~ -DROP TABLE - - -Time: 45ms total (execution 45ms / network 0ms) -~~~ -{% include_cached copy-clipboard.html %} -~~~sql -> SELECT * FROM foo.bar AS OF SYSTEM TIME '2022-02-01 21:11:53.63771+00'; -~~~ -~~~ - id ------- - 1 - 2 -(2 rows) - - -Time: 2ms total (execution 2ms / network 0ms) -~~~ -{% include_cached copy-clipboard.html %} -~~~sql -> SELECT * FROM foo.bar; -~~~ -~~~ -ERROR: relation "foo.bar" does not exist -SQLSTATE: 42P01 -~~~ - -{{site.data.alerts.callout_danger}} -Once garbage collection has occurred, `AS OF SYSTEM TIME` will no longer be able to recover lost data. For more long-term recovery solutions, consider taking either a [full or incremental backup](take-full-and-incremental-backups.html) of your cluster. -{{site.data.alerts.end}} - -## See also - -- [Select Historical Data](select-clause.html#select-historical-data-time-travel) -- [Time-Travel Queries](https://www.cockroachlabs.com/blog/time-travel-queries-select-witty_subtitle-the_future/) -- [Follower Reads](follower-reads.html) -- [Follower Reads Topology Pattern](topology-follower-reads.html) - -## Tech note - -Although the following format is supported, it is not intended to be used by most users. - -HLC timestamps can be specified using a [`DECIMAL`](decimal.html). The -integer part is the wall time in nanoseconds. The fractional part is -the logical counter, a 10-digit integer. This is the same format as -produced by the `cluster_logical_timestamp()` function. diff --git a/src/current/v22.1/authentication.md b/src/current/v22.1/authentication.md deleted file mode 100644 index 84e89207f80..00000000000 --- a/src/current/v22.1/authentication.md +++ /dev/null @@ -1,310 +0,0 @@ ---- -title: Authenticating to CockroachDB Self-Hosted Clusters -summary: Learn about the authentication features for secure CockroachDB clusters. -toc: true -docs_area: manage ---- - -Authentication refers to the act of verifying the identity of the other party in communication. CockroachDB requires TLS 1.3 digital certificates for inter-node authentication and accepts TLS 1.2 and TLS 1.3 certificates for client-node authentication. 
This document discusses how CockroachDB uses digital certificates and what your options are for configuring user authentication for SQL clients and the DB Console UI. It also offers a [conceptual overview](#background-on-public-key-cryptography-and-digital-certificates) of public key cryptography and digital certificates. - -- If you are familiar with public key cryptography and digital certificates, then reading the [Using digital certificates with CockroachDB](#using-digital-certificates-with-cockroachdb) section should be enough. -- If you are unfamiliar with public key cryptography and digital certificates, you might want to skip over to the [conceptual overview](#background-on-public-key-cryptography-and-digital-certificates) first and then come back to the [Using digital certificates with CockroachDB](#using-digital-certificates-with-cockroachdb) section. -- If you want to know how to create CockroachDB security certificates, see [Create Security Certificates](cockroach-cert.html). - -## Connecting to a CockroachDB cluster - -Users may connect with CockroachDB {{ site.data.products.core }} clusters in 2 main ways: - -- SQL clients connections, including the CockroachDB CLI client and the [various supported drivers and ORMs](install-client-drivers.html), connect directly to CockroachDB clusters using the [CockroachDB SQL interface](sql-feature-support.html). - -- A read-only monitoring service, which provides cluster and database details, and information useful for troubleshooting and performance tuning. Each CockroachDB node also acts as an HTTP server, providing both a [browser UI DB console](ui-overview.html) and the [cluster API](cluster-api.html), which provides much of the same information as the DB console, but as a rest API suitable for programmatic access. - -## Using digital certificates with CockroachDB - -Each CockroachDB node in a secure cluster must have a **node certificate**, which is a TLS 1.3 certificate. This certificate is multi-functional: the same certificate is presented irrespective of whether the node is acting as a server or a client. The nodes use these certificates to establish secure connections with clients and with other nodes. Node certificates have the following requirements: - -- The hostname or address (IP address or DNS name) used to reach a node, either directly or through a load balancer, must be listed in the **Common Name** or **Subject Alternative Names** fields of the certificate: - - - The values specified in [`--listen-addr`](cockroach-start.html#networking) and [`--advertise-addr`](cockroach-start.html#networking) flags, or the node hostname and fully qualified hostname if not specified - - Any host addresses/names used to reach a specific node - - Any load balancer addresses/names or DNS aliases through which the node could be reached - - `localhost` and local address if connections are made through the loopback device on the same host - -- CockroachDB must be configured to trust the certificate authority that signed the certificate. - -Based on your security setup, you can use the [`cockroach cert` commands](cockroach-cert.html), [`openssl` commands](create-security-certificates-openssl.html), or a [custom CA](create-security-certificates-custom-ca.html) to generate all the keys and certificates. - -A CockroachDB cluster consists of multiple nodes and clients. The nodes can communicate with each other, with the SQL clients, and the DB Console. 
In client-node SQL communication and client-UI communication, the node acts as a server, but in inter-node communication, a node may act as a server or a client. Hence authentication in CockroachDB involves: - -- Node authentication using [TLS 1.3](https://en.wikipedia.org/wiki/Transport_Layer_Security) digital certificates. -- Client authentication using TLS digital certificates, passwords, or [GSSAPI authentication](gssapi_authentication.html) (for Enterprise users). - -### Node authentication - -To set up a secure cluster without using an existing certificate authority, you'll need to generate the following files: - -- CA certificate -- Node certificate and key -- (Optional) UI certificate and key - -### Client authentication - -CockroachDB offers the following methods for client authentication: - -- **Client certificate and key authentication**, which is available to all users. To ensure the highest level of security, we recommend only using client certificate and key authentication. - - Example: - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --certs-dir=certs --user=jpointsman - ~~~ - -- **Password authentication**, which is available to users and roles who you've created passwords for. Password creation is supported only in secure clusters. - - Example: - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --certs-dir=certs --user=jpointsman - ~~~ - - ~~~ - # Welcome to the CockroachDB SQL shell. - # All statements must be terminated by a semicolon. - # To exit, type: \q. - # - Enter password: - ~~~ - - The client still needs the CA certificate to validate the certificate of the node. - - {{site.data.alerts.callout_success}} - For improved performance, CockroachDB securely caches password authentication information for users. To limit the authentication latency of users logging into a new session, we recommend that you run bulk `ROLE` operations ([`CREATE ROLE`](create-role.html), [`ALTER ROLE`](alter-role.html), [`DROP ROLE`](drop-role.html)) inside a transaction, and run any regularly-scheduled `ROLE` operations together, rather than at different times throughout the day. - {{site.data.alerts.end}} - -- **Password authentication without TLS** - - For deployments where transport security is already handled at the infrastructure level (e.g., IPSec with DMZ), and TLS-based transport security is not possible or not desirable, CockroachDB now supports delegating transport security to the infrastructure with the flag `--accept-sql-without-tls` for [`cockroach start`](cockroach-start.html#security). The `--accept-sql-without-tls` flag is in [preview](cockroachdb-feature-availability.html). - - With this flag, SQL clients can establish a session over TCP without a TLS handshake. They still need to present valid authentication credentials, for example a password in the default configuration. Different authentication schemes can be further configured as per `server.host_based_authentication.configuration`. - - Example: - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --user=jpointsman --insecure - ~~~ - - ~~~ - # Welcome to the CockroachDB SQL shell. - # All statements must be terminated by a semicolon. - # To exit, type: \q. - # - Enter password: - ~~~ - -- [**Single sign-on authentication**](sso.html), which is available to [Enterprise users](enterprise-licensing.html) to grant access to the DB Console. 
- -- [**GSSAPI authentication**](gssapi_authentication.html), which is available to [Enterprise users](enterprise-licensing.html). - -### Using `cockroach cert` or `openssl` commands - -You can use the [`cockroach cert` commands](cockroach-cert.html) or [`openssl` commands](create-security-certificates-openssl.html) to create the CA certificate and key, and node and client certificates and keys. - -Note that the node certificate created using `cockroach cert` or`openssl` is multi-functional, which means that the same certificate is presented irrespective of whether the node is acting as a server or a client. Thus all nodes must have the following: - -- `CN=node` for the special user `node` when the node acts as a client. -- All IP addresses and DNS names for the node must be listed in the `Subject Alternative Name` field for when the node acts as a server. CockroachDB also supports [wildcard notation in DNS names](https://en.wikipedia.org/wiki/Wildcard_certificate). - -**Node key and certificates** - -A node must have the following files with file names as specified in the table: - -File name | File usage --------------|------------ -`ca.crt` | CA certificate created using the `cockroach cert` command. -`node.crt` | Server certificate created using the `cockroach cert` command.

`node.crt` must have `CN=node` and the list of IP addresses and DNS names listed in the `Subject Alternative Name` field. CockroachDB also supports [wildcard notation in DNS names](https://en.wikipedia.org/wiki/Wildcard_certificate).

Must be signed by the CA represented by `ca.crt`. -`node.key` | Server key created using the `cockroach cert` command. - -**Client key and certificates** - -A client must have the following files with file names as specified in the table: - -File name | File usage --------------|------------ -`ca.crt` | CA certificate created using the `cockroach cert` command. -`client..crt` | Client certificate for `` (e.g., `client.root.crt` for user `root`).

Each `client.<username>.crt` must have `CN=<username>` (for example, `CN=marc` for `client.marc.crt`).

Must be signed by the CA represented by `ca.crt`. -`client..key` | Client key created using the `cockroach cert` command. - -Alternatively, you can use [password authentication](#client-authentication). Remember, the client still needs `ca.crt` for node authentication. - -### Using a custom CA - -In the previous section, we discussed the scenario where the node and client certificates are signed by the CA created using the `cockroach cert` command. But what if you want to use an external CA, like your organizational CA or a public CA? In that case, our certificates might need some modification. Here’s why: - -As mentioned earlier, the node certificate is multi-functional, as in the same certificate is presented irrespective of whether the node is acting as a server or client. To make the certificate multi-functional, the `node.crt` must have `CN=node` and the list of IP addresses and DNS names listed in the`Subject Alternative Names` field. - -But some CAs will not sign a certificate containing a `CN` that is not an IP address or domain name. Here's why: The TLS client certificates are used to authenticate the client connecting to a server. Because most client certificates authenticate a user instead of a device, the certificates contain usernames instead of hostnames. This makes it difficult for public CAs to verify the client's identity and hence most public CAs will not sign a client certificate. - -To get around this issue, we can split the node key and certificate into two: - -- `node.crt` and `node.key`: The node certificate to be presented when the node acts as a server and the corresponding key. `node.crt` must have the list of IP addresses and DNS names listed in the `Subject Alternative Names`. -- `client.node.crt` and `client.node.key`: The node certificate to be presented when the node acts as a client for another node, and the corresponding key. `client.node.crt` must have `CN=node`. - -**Node key and certificates** - -A node must have the following files with file names as specified in the table: - -File name | File usage --------------|------------ -`ca.crt` | CA certificate issued by the public CA or your organizational CA. -`node.crt` | Node certificate for when node acts as server.

All IP addresses and DNS names for the node must be listed in the `Subject Alternative Name`. CockroachDB also supports [wildcard notation in DNS names](https://en.wikipedia.org/wiki/Wildcard_certificate).

Must be signed by the CA represented by `ca.crt`. -`node.key` | Server key corresponding to `node.crt`. -`client.node.crt` | Node certificate for when node acts as client.

Must have `CN=node`.

Must be signed by the CA represented by `ca.crt`. -`client.node.key` | Client key corresponding to `client.node.crt`. - -Optionally, if you have a certificate issued by a public CA to securely access the DB Console, you need to place the certificate and key (`ui.crt` and `ui.key` respectively) in the directory specified by the `--certs-dir` flag. - -**Client key and certificates** - -A client must have the following files with file names as specified in the table: - -File name | File usage --------------|------------ -`ca.crt` | CA certificate issued by the public CA or your organizational CA. -`client..crt` | Client certificate for `` (e.g., `client.root.crt` for user `root`).

Each `client.<username>.crt` must have `CN=<username>` (for example, `CN=marc` for `client.marc.crt`).

Must be signed by the CA represented by `ca.crt`. -`client..key` | Client key corresponding to `client..crt`. - -Alternatively, you can use [password authentication](#client-authentication). Remember, the client still needs `ca.crt` for node authentication. - -### Using a public CA certificate to access the DB Console for a secure cluster - -One of the limitations of using `cockroach cert` or `openssl` is that the browsers used to access the DB Console do not trust the node certificates presented to them. Web browsers come preloaded with CA certificates from well-established entities (e.g., GlobalSign and DigiTrust). The CA certificate generated using the `cockroach cert` or `openssl` is not preloaded in the browser. Hence on accessing the DB Console for a secure cluster, you get the “Unsafe page” warning. Now you could add the CA certificate to the browser to avoid the warning, but that is not a recommended practice. Instead, you can use the established CAs (for example, Let’s Encrypt), to create a certificate and key to access the DB Console. - -Once you have the UI cert and key, add it to the Certificates directory specified by the `--certs-dir` flag in the `cockroach cert` command. The next time the browser tries to access the UI, the node will present the UI cert instead of the node cert, and you’ll not see the “unsafe site” warning anymore. - -**Node key and certificates** - -A node must have the following files with file names as specified in the table: - -File name | File usage --------------|------------ -`ca.crt` | CA certificate created using the `cockroach cert` command. -`node.crt` | Server certificate created using the `cockroach cert` command.

`node.crt` must have `CN=node` and the list of IP addresses and DNS names listed in the `Subject Alternative Name` field. CockroachDB also supports [wildcard notation in DNS names](https://en.wikipedia.org/wiki/Wildcard_certificate).

Must be signed by the CA represented by `ca.crt`. -`node.key` | Server key created using the `cockroach cert` command. -`ui.crt` | UI certificate signed by the public CA. `ui.crt` must have the IP addresses and DNS names used to reach the DB Console listed in the `Subject Alternative Name`. -`ui.key` | UI key corresponding to `ui.crt`. - -**Client key and certificates** - -A client must have the following files with file names as specified in the table: - -File name | File usage --------------|------------ -`ca.crt` | CA certificate created using the `cockroach cert` command. -`client..crt` | Client certificate for `` (e.g., `client.root.crt` for user `root`).

Each `client.<username>.crt` must have `CN=<username>` (for example, `CN=marc` for `client.marc.crt`).

Must be signed by the CA represented by `ca.crt`. -`client..key` | Client key created using the `cockroach cert` command. - -Alternatively, you can use [password authentication](#client-authentication). Remember, the client still needs `ca.crt` for node authentication. - -### Using split CA certificates - -{{site.data.alerts.callout_danger}} -We do not recommend you use split CA certificates unless your organizational security practices mandate you to do so. -{{site.data.alerts.end}} - -You might encounter situations where you need separate CAs to sign and verify node and client certificates. In that case, you would need two CAs and their respective certificates and keys: `ca.crt` and `ca-client.crt`. - -**Node key and certificates** - -A node must have the following files with file names as specified in the table: - -File name | File usage --------------|------------ -`ca.crt` | CA certificate to verify node certificates. -`ca-client.crt` | CA certificate to verify client certificates. -`node.crt` | Node certificate for when node acts as server.

All IP addresses and DNS names for the node must be listed in the `Subject Alternative Name`. CockroachDB also supports [wildcard notation in DNS names](https://en.wikipedia.org/wiki/Wildcard_certificate).

Must be signed by the CA represented by `ca.crt`. -`node.key` | Server key corresponding to `node.crt`. -`client.node.crt` | Node certificate for when node acts as client. This certificate must be signed by the CA represented by `ca-client.crt`.

Must have `CN=node`. -`client.node.key` | Client key corresponding to `client.node.crt`. - -Optionally, if you have a certificate issued by a public CA to securely access the DB Console, you need to place the certificate and key (`ui.crt` and `ui.key` respectively) in the directory specified by the `--certs-dir` flag. - -**Client key and certificates** - -A client must have the following files with file names as specified in the table: - -File name | File usage --------------|------------ -`ca.crt` | CA certificate. -`client..crt` | Client certificate for `` (e.g., `client.root.crt` for user `root`).

Each `client.<username>.crt` must have `CN=<username>` (for example, `CN=marc` for `client.marc.crt`).

Must be signed by the CA represented by `ca-client.crt`. -`client..key` | Client key corresponding to `client..crt`. - -## Authentication for cloud storage - -See [Use Cloud Storage for Bulk Operations](use-cloud-storage-for-bulk-operations.html). - -## Authentication best practice - -As a security best practice, we recommend that you rotate the node, client, or CA certificates in the following scenarios: - -- The node, client, or CA certificates are expiring soon. -- Your organization's compliance policy requires periodical certificate rotation. -- The key (for a node, client, or CA) is compromised. -- You need to modify the contents of a certificate, for example, to add another DNS name or the IP address of a load balancer through which a node can be reached. In this case, you would need to rotate only the node certificates. - -For details about when and how to change security certificates without restarting nodes, see [Rotate Security Certificates](rotate-certificates.html). - -## Background on public key cryptography and digital certificates - -As mentioned above, CockroachDB supports the [TLS 1.3 and TLS 1.2](https://en.wikipedia.org/wiki/Transport_Layer_Security) security protocols, which take advantage of both symmetric (to encrypt data in flight) as well as asymmetric encryption (to establish a secure channel as well as **authenticate** the communicating parties). - -Authentication refers to the act of verifying the identity of the other party in communication. CockroachDB uses TLS 1.3 digital certificates for inter-node authentication, and your choice of TLS 1.2 and TLS 1.3 certificates for client-node authentication. These authentication methods require a certificate authority (CA) as well as keys and certificates for nodes, clients, and, optionally, the [DB Console](#using-a-public-ca-certificate-to-access-the-db-console-for-a-secure-cluster). - -To understand how CockroachDB uses digital certificates, let's first understand what each of these terms means. - -Consider two people: Amy and Rosa, who want to communicate securely over an insecure computer network. The traditional solution is to use symmetric encryption that involves encrypting and decrypting a plaintext message using a shared key. Amy encrypts her message using the key and sends the encrypted message across the insecure channel. Rosa decrypts the message using the same key and reads the message. This seems like a logical solution until you realize that you need a secure communication channel to send the encryption key. - -To solve this problem, cryptographers came up with **asymmetric encryption** to set up a secure communication channel over which an encryption key can be shared. - -### Asymmetric encryption - -Asymmetric encryption involves a pair of keys instead of a single key. The two keys are called the **public key** and the **private key**. The keys consist of very long numbers linked mathematically in a way such that a message encrypted using a public key can only be decrypted using the private key and vice versa. The message cannot be decrypted using the same key that was used to encrypt the message. - -So going back to our example, Amy and Rosa both have their own public-private key pairs. They keep their private keys safe with themselves and publicly distribute their public keys. Now when Amy wants to send a message to Rosa, she requests Rosa's public key, encrypts the message using Rosa’s public key, and sends the encrypted message. Rosa uses her own private key to decrypt the message. 
- -But what if a malicious imposter intercepts the communication? The imposter might pose as Rosa and send their public key instead of Rosa’s. There's no way for Amy to know that the public key she received isn’t Rosa’s, so she would end up using the imposter's public key to encrypt the message and send it to the imposter. The imposter can use their own private key and decrypt and read the message, thus compromising the secure communication channel between Amy and Rosa. - -To prevent this security risk, Amy needs to be sure that the public key she received was indeed Rosa’s. That’s where the Certificate Authority (CA) comes into the picture. - -### Certificate authority - -Certificate authorities are established entities with their own public and private key pairs. They act as a root of trust and verify the identities of the communicating parties and validate their public keys. CAs can be public and paid entities (e.g., GeoTrust and Comodo), or public and free CAs (e.g., Let’s Encrypt), or your own organizational CA (e.g., CockroachDB CA). The CAs' public keys are typically widely distributed (e.g., your browser comes preloaded with certs from popular CAs like DigiCert, GeoTrust, and so on). - -Think of the CA as the passport authority of a country. When you want to get your passport as your identity proof, you submit an application to your country's passport authority. The application contains important identifying information about you: your name, address, nationality, date of birth, and so on. The passport authority verifies the information they received and validates your identity. They then issue a document - the passport - that can be presented anywhere in the world to verify your identity. For example, the TSA agent at the airport does not know you and has no reason to trust you are who you say you are. However, they trust the passport authority and thus accept your identity as presented on your passport because it has been verified and issued by the passport authority. - -Going back to our example and assuming that we trust the CA, Rosa needs to get her public key verified by the CA. She sends a CSR (Certificate Signing Request) to the CA that contains her public key and relevant identifying information. The CA will verify that it is indeed Rosa’s public key and information, _sign_ the CSR using the CA's own private key, and generate a digital document called the **digital certificate**. In our passport analogy, this is Rosa's passport containing verified identifying information about her and trusted by everyone who trusts the CA. The next time Rosa wants to establish her identity, she will present her digital certificate. - -### Digital certificate - -A public key is shared using a digital certificate signed by a CA using the CA's private key. The digital certificate contains: - -- The certificate owner’s public key -- Information about the certificate owner -- The CA's digital signature - -### Digital signature - -The CA's digital signature works as follows: The certificate contents are put through a mathematical function to create a **hash value**. This hash value is encrypted using the CA's private key to generate the **digital signature**. The digital signature is added to the digital certificate. In our example, the CA adds their digital signature to Rosa's certificate validating her identity and her public key. - -As discussed [earlier](#certificate-authority), the CA's public key is widely distributed. In our example, Amy already has the CA's public key. 
Now when Rosa presents her digital certificate containing her public key, Amy uses the CA's public key to decrypt the digital signature on Rosa's certificate and gets the hash value encoded in the digital signature. Amy also generates the hash value for the certificate on her own. If the hash values match, then Amy can be sure that the certificate and hence the public key it contains indeed belongs to Rosa; otherwise, she can determine that the communication channel has been compromised and refuse further contact. - -### How it all works together - -Let's see how the digital certificate is used in client-server communication: The client (e.g., a web browser) has the CA certificate (containing the CA's public key). When the client receives a server's certificate signed by the same CA, it can use the CA certificate to verify the server's certificate, thus validating the server's identity, and securely connect to the server. The important thing here is that the client needs to have the CA certificate. If you use your own organizational CA instead of a publicly established CA, you need to make sure you distribute the CA certificate to all the clients. - -## See also - -- [Client Connection Parameters](connection-parameters.html) -- [Manual Deployment](manual-deployment.html) -- [Orchestrated Deployment](kubernetes-overview.html) -- [Local Deployment](secure-a-cluster.html) -- [`cockroach` Commands Overview](cockroach-commands.html) diff --git a/src/current/v22.1/authorization.md b/src/current/v22.1/authorization.md deleted file mode 100644 index f991253b0bf..00000000000 --- a/src/current/v22.1/authorization.md +++ /dev/null @@ -1,476 +0,0 @@ ---- -title: Managing SQL User Authorization -summary: Learn procedures for managing the lifecycle of SQL users and roles. -toc: true -docs_area: manage ---- - -This page documents procedures for managing the lifecycle of SQL users and roles on CockroachDB clusters. - -For reference documentation and explanation of related concepts, see [Security Reference—Authorization](security-reference/authorization.html). - -## Create CockroachDB users - -Use the [`CREATE USER`](create-user.html) and [`DROP USER`](drop-user.html) statements to create and remove users, the [`ALTER USER`](alter-user.html) statement to add or change a user's password and role options, the [`GRANT`](grant.html) and [`REVOKE`](revoke.html) statements to manage the user’s privileges, and the [`SHOW USERS`](show-users.html) statement to list users. - -A new user must be granted the required privileges for each database and table that the user needs to access. - -{{site.data.alerts.callout_info}} -By default, a new user belongs to the `public` role and has no privileges other than those assigned to the `public` role. -{{site.data.alerts.end}} - - -## Create and manage roles - -To create and manage your cluster's roles, use the following statements: - -Statement | Description -----------|------------ -[`CREATE ROLE`](create-role.html) | Create SQL roles. -[`DROP ROLE`](drop-role.html) | Remove one or more SQL roles. -[`GRANT`](grant.html) | Manage each role or user's SQL privileges for interacting with specific databases and tables, or add a role or user as a member to a role. -[`REVOKE`](revoke.html) | Revoke privileges from users and/or roles, or revoke a role or user's membership to a role. -[`SHOW ROLES`](show-roles.html) | List the roles for all databases. -[`SHOW GRANTS`](show-grants.html) | List the privileges granted to users. 
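
For instance, the role membership assumed by the example that follows (a user `max` who is a member of a role `cockroachlabs`) could be set up with these statements. A minimal sketch:

{% include_cached copy-clipboard.html %}
~~~ sql
> CREATE ROLE cockroachlabs;
> CREATE USER max;
> GRANT cockroachlabs TO max;
~~~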
- - - - - - - - - - - - - - -For example, suppose a cluster contains a role named `cockroachlabs`, and a user named `max` is a member of the `cockroachlabs` role: - -~~~ -root@localhost:26257/defaultdb> show roles; - username | options | member_of -----------------+---------+------------------ - admin | | {} - cockroachlabs | | {} - max | | {cockroachlabs} - root | | {admin} -(4 rows) -~~~ - -If a user connects to the cluster as `cockroachlabs` and creates a table named `albums`, then any user that is also a member of the `cockroachlabs` role will have `ALL` privileges on that table: - -~~~ -cockroachlabs@localhost:26257/db> CREATE TABLE albums ( - id UUID PRIMARY KEY, - title STRING, - length DECIMAL, - tracklist JSONB -); -~~~ - -~~~ -max@localhost:26257/db> ALTER TABLE albums ADD COLUMN year INT; -ALTER TABLE - - -Time: 1.137s total (execution 1.137s / network 0.000s) - -max@localhost:26257/db> SHOW CREATE TABLE albums; - table_name | create_statement --------------+------------------------------------------------------------ - albums | CREATE TABLE public.albums ( - | id UUID NOT NULL, - | title STRING NULL, - | length DECIMAL NULL, - | tracklist JSONB NULL, - | year INT8 NULL, - | CONSTRAINT "primary" PRIMARY KEY (id ASC), - | FAMILY "primary" (id, title, length, tracklist, year) - | ) -(1 row) -~~~ - - - - - -## Example - -
- - -
- -
- -The following example uses MovR, a fictional vehicle-sharing application, to demonstrate CockroachDB [SQL statements](sql-statements.html). For more information about the MovR example application and dataset, see [MovR: A Global Vehicle-sharing App](movr.html). - -Let's say we want to create the following access control setup for the `movr` database: - -- One database admin (named `db_admin`) who can perform all database operations for existing tables as well as for tables added in the future. -- One app user (named `app_user`) who can add, read update, and delete vehicles from the `vehicles` table. -- One user (named `report_user`) who can only read the `vehicles` table. - -1. Use the [`cockroach demo`](cockroach-demo.html) command to load the `movr` database and dataset into a CockroachDB cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach demo - ~~~ - -2. Create the database admin (named `db_admin`) who can perform all database operations for existing tables as well as for tables added in the future: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE USER db_admin; - ~~~ - -3. Grant all privileges on database `movr` to user `db_admin`: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > GRANT ALL ON DATABASE movr TO db_admin; - ~~~ - -4. Grant all privileges on all tables in database `movr` to user `db_admin`: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > GRANT ALL ON TABLE * TO db_admin; - ~~~ - -5. Verify that `db_admin` has all privileges: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SHOW GRANTS FOR db_admin; - ~~~ - - ~~~ - database_name | schema_name | table_name | grantee | privilege_type - +---------------+--------------------+----------------------------+----------+----------------+ - movr | crdb_internal | NULL | db_admin | ALL - movr | information_schema | NULL | db_admin | ALL - movr | pg_catalog | NULL | db_admin | ALL - movr | public | NULL | db_admin | ALL - movr | public | promo_codes | db_admin | ALL - movr | public | rides | db_admin | ALL - movr | public | user_promo_codes | db_admin | ALL - movr | public | users | db_admin | ALL - movr | public | vehicle_location_histories | db_admin | ALL - movr | public | vehicles | db_admin | ALL - (10 rows) - ~~~ - -6. As the `root` user, create a SQL user named `app_user` with permissions to add, read, update, and delete vehicles in the `vehicles` table: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE USER app_user; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > GRANT INSERT, DELETE, UPDATE, SELECT ON vehicles TO app_user; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SHOW GRANTS FOR app_user; - ~~~ - - ~~~ - database_name | schema_name | table_name | grantee | privilege_type - +---------------+-------------+------------+----------+----------------+ - movr | public | vehicles | app_user | DELETE - movr | public | vehicles | app_user | INSERT - movr | public | vehicles | app_user | SELECT - movr | public | vehicles | app_user | UPDATE - (4 rows) - ~~~ - -7. 
As the `root` user, create a SQL user named `report_user` with permissions to only read from the `vehicles` table: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE USER report_user; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > GRANT SELECT ON vehicles TO report_user; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SHOW GRANTS FOR report_user; - ~~~ - - ~~~ - database_name | schema_name | table_name | grantee | privilege_type - +---------------+-------------+------------+-------------+----------------+ - movr | public | vehicles | report_user | SELECT - (1 row) - ~~~ - -
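
If this access needs to be removed later, the same statements apply in reverse. A minimal sketch using [`REVOKE`](revoke.html) and [`DROP USER`](drop-user.html) (a user's privileges must be revoked before the user can be dropped):

{% include_cached copy-clipboard.html %}
~~~ sql
> REVOKE SELECT ON vehicles FROM report_user;
~~~

{% include_cached copy-clipboard.html %}
~~~ sql
> DROP USER report_user;
~~~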
- -
- -The following example uses MovR, a fictional vehicle-sharing application, to demonstrate CockroachDB SQL statements. For more information about the MovR example application and dataset, see [MovR: A Global Vehicle-sharing App](movr.html). - -Let's say we want to create the following access control setup for the `movr` database: - -- Two database admins (named `db_admin_1` and `db_admin_2`) who can perform all database operations for existing tables as well as for tables added in the future. -- Three app users (named `app_user_1`, `app_user_2`, and `app_user_3`) who can add, read update, and delete vehicles from the `vehicles` table. -- Five users (named `report_user_1`, `report_user_2`, `report_user_3`, `report_user_4`, `report_user_5`) who can only read the `vehicles` table. - -1. Use the [`cockroach demo`](cockroach-demo.html) command to load the `movr` database and dataset into a CockroachDB cluster.: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach demo - ~~~ - -2. Create the database admin role (named `db_admin_role`) whose members can perform all database operations for existing tables as well as for tables added in the future: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE ROLE db_admin_role; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SHOW ROLES; - ~~~ - - ~~~ - username | options | member_of - ----------------+------------+------------ - admin | CREATEROLE | {} - db_admin_role | NOLOGIN | {} - root | CREATEROLE | {admin} - (3 rows) - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > GRANT ALL ON DATABASE movr TO db_admin_role; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > GRANT ALL ON TABLE * TO db_admin_role; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SHOW GRANTS ON DATABASE movr; - ~~~ - - ~~~ - database_name | schema_name | grantee | privilege_type - ----------------+--------------------+---------------+----------------- - movr | crdb_internal | admin | ALL - movr | crdb_internal | db_admin_role | ALL - movr | crdb_internal | root | ALL - movr | information_schema | admin | ALL - movr | information_schema | db_admin_role | ALL - movr | information_schema | root | ALL - movr | pg_catalog | admin | ALL - movr | pg_catalog | db_admin_role | ALL - movr | pg_catalog | root | ALL - movr | public | admin | ALL - movr | public | db_admin_role | ALL - movr | public | root | ALL - (12 rows) - ~~~ - -3. Create two database admin users (named `db_admin_1` and `db_admin_2`) and grant them membership to the `db_admin_role` role: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE USER db_admin_1; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE USER db_admin_2; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > GRANT db_admin_role TO db_admin_1, db_admin_2; - ~~~ - -4. Create a role named `app_user_role` whose members can add, read update, and delete vehicles to the `vehicles` table. 
- - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE ROLE app_user_role; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SHOW ROLES; - ~~~ - - ~~~ - username | options | member_of - ----------------+------------+------------------ - admin | CREATEROLE | {} - app_user_role | NOLOGIN | {} - db_admin_1 | | {db_admin_role} - db_admin_2 | | {db_admin_role} - db_admin_role | NOLOGIN | {} - root | CREATEROLE | {admin} - (6 rows) - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > GRANT INSERT, UPDATE, DELETE, SELECT ON TABLE vehicles TO app_user_role; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SHOW GRANTS ON vehicles; - ~~~ - - ~~~ - database_name | schema_name | table_name | grantee | privilege_type - ----------------+-------------+------------+---------------+----------------- - movr | public | vehicles | admin | ALL - movr | public | vehicles | app_user_role | DELETE - movr | public | vehicles | app_user_role | INSERT - movr | public | vehicles | app_user_role | SELECT - movr | public | vehicles | app_user_role | UPDATE - movr | public | vehicles | db_admin_role | ALL - movr | public | vehicles | root | ALL - (7 rows) - ~~~ - -5. Create three app users (named `app_user_1`, `app_user_2`, and `app_user_3`) and grant them membership to the `app_user_role` role: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE USER app_user_1; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE USER app_user_2; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE USER app_user_3; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > GRANT app_user_role TO app_user_1, app_user_2, app_user_3; - ~~~ - -6. Create a role named `report_user_role` whose members can only read the `vehicles` table. - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE ROLE report_user_role; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SHOW ROLES; - ~~~ - - ~~~ - username | options | member_of - -------------------+------------+------------------ - admin | CREATEROLE | {} - app_user_1 | | {app_user_role} - app_user_2 | | {app_user_role} - app_user_3 | | {app_user_role} - app_user_role | NOLOGIN | {} - db_admin_1 | | {db_admin_role} - db_admin_2 | | {db_admin_role} - db_admin_role | NOLOGIN | {} - report_user_role | NOLOGIN | {} - root | CREATEROLE | {admin} - (10 rows) - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > GRANT SELECT ON vehicles TO report_user_role; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SHOW GRANTS ON vehicles; - ~~~ - - ~~~ - database_name | schema_name | table_name | grantee | privilege_type - ----------------+-------------+------------+------------------+----------------- - movr | public | vehicles | admin | ALL - movr | public | vehicles | app_user_role | DELETE - movr | public | vehicles | app_user_role | INSERT - movr | public | vehicles | app_user_role | SELECT - movr | public | vehicles | app_user_role | UPDATE - movr | public | vehicles | db_admin_role | ALL - movr | public | vehicles | report_user_role | SELECT - movr | public | vehicles | root | ALL - (8 rows) - ~~~ - -7. 
Create five report users (named `report_user_1`, `report_user_2`, `report_user_3`, `report_user_4`, and `report_user_5`) and grant them membership to the `report_user_role` role: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE USER report_user_1; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE USER report_user_2; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE USER report_user_3; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE USER report_user_4; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE USER report_user_5; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > GRANT report_user_role TO report_user_1, report_user_2, report_user_3, report_user_4, report_user_5; - ~~~ - -
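
Role membership can be adjusted later without touching the underlying grants. For example, removing one report user from the role and checking that user's remaining table privileges might look like the following sketch:

{% include_cached copy-clipboard.html %}
~~~ sql
> REVOKE report_user_role FROM report_user_5;
~~~

{% include_cached copy-clipboard.html %}
~~~ sql
> SHOW GRANTS ON TABLE vehicles FOR report_user_5;
~~~

After the `REVOKE`, `report_user_5` no longer inherits the `SELECT` privilege through `report_user_role`.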
- -## See also - -- [Client Connection Parameters](connection-parameters.html) -- [SQL Statements](sql-statements.html) -- [`CREATE USER`](create-user.html) -- [`ALTER USER`](alter-user.html) -- [`DROP USER`](drop-user.html) -- [`SHOW USERS`](show-users.html) -- [`CREATE ROLE`](create-role.html) -- [`DROP ROLE`](drop-role.html) -- [`SHOW ROLES`](show-roles.html) -- [`GRANT`](grant.html) -- [`REVOKE`](revoke.html) -- [`SHOW GRANTS`](show-grants.html) diff --git a/src/current/v22.1/aws-dms.md b/src/current/v22.1/aws-dms.md deleted file mode 100644 index ebb78d27791..00000000000 --- a/src/current/v22.1/aws-dms.md +++ /dev/null @@ -1,347 +0,0 @@ ---- -title: Migrate with AWS Database Migration Service (DMS) -summary: Learn how to use AWS Database Migration Service (DMS) to migrate data to a CockroachDB target cluster. -toc: true -docs_area: migrate ---- - -This page has instructions for setting up [AWS Database Migration Service (DMS)](https://aws.amazon.com/dms/) to migrate data to CockroachDB from an existing, publicly hosted database containing application data such as MySQL, Oracle, or PostgreSQL. - -For a detailed tutorial about using AWS DMS and information about specific migration tasks, see the [AWS DMS documentation site](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html). - -{{site.data.alerts.callout_info}} -{% include feature-phases/preview.md %} -{{site.data.alerts.end}} - -For any issues related to AWS DMS, aside from its interaction with CockroachDB as a migration target, contact [AWS Support](https://aws.amazon.com/contact-us/). - -{{site.data.alerts.callout_info}} -Using CockroachDB as a source database within AWS DMS is unsupported. -{{site.data.alerts.end}} - -## Before you begin - -Complete the following items before starting this tutorial: - -- Configure a [replication instance](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.Creating.html) in AWS. -- Configure a [source endpoint](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.html) in AWS pointing to your source database. -- Ensure you have a secure, publicly available CockroachDB cluster running the latest **{{ page.version.version }}** [production release](../releases/index.html). - - If your CockroachDB cluster is running v22.1.14 or later, set the following session variable using [`ALTER ROLE ... SET {session variable}`](alter-role.html#set-default-session-variable-values-for-a-role): - - {% include_cached copy-clipboard.html %} - ~~~ sql - ALTER ROLE {username} SET copy_from_retries_enabled = true; - ~~~ - - This prevents a potential issue when migrating especially large tables with millions of rows. - -- If you are migrating to a CockroachDB {{ site.data.products.cloud }} cluster and plan to [use replication as part of your migration strategy](#step-2-1-task-configuration), you must first **disable** [revision history for cluster backups](take-backups-with-revision-history-and-restore-from-a-point-in-time.html) for the migration to succeed. - {{site.data.alerts.callout_danger}} - You will not be able to run a [point-in-time restore](take-backups-with-revision-history-and-restore-from-a-point-in-time.html#point-in-time-restore) as long as revision history for cluster backups is disabled. Once you [verify that the migration succeeded](#step-3-verify-the-migration), you should re-enable revision history. 
- {{site.data.alerts.end}} - - - If the output of [`SHOW SCHEDULES`](show-schedules.html) shows any backup schedules, run [`ALTER BACKUP SCHEDULE {schedule_id} SET WITH revision_history = 'false'`](../{{site.current_cloud_version}}/alter-backup-schedule.html) for each backup schedule. - - If the output of `SHOW SCHEDULES` does not show backup schedules, [contact Support](https://support.cockroachlabs.com) to disable revision history for cluster backups. -- Manually create all schema objects in the target CockroachDB cluster. AWS DMS can create a basic schema, but does not create indexes or constraints such as foreign keys and defaults. - - If you are migrating from a PostgreSQL database, [use the **Schema Conversion Tool**](../cockroachcloud/migrations-page.html) to convert and export your schema. Ensure that any schema changes are also reflected on your PostgreSQL tables, or add [transformation rules](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Transformations.html). If you make substantial schema changes, the AWS DMS migration may fail. - - {{site.data.alerts.callout_info}} - All tables must have an explicitly defined primary key. For more guidance, see the [Migration Overview](migration-overview.html#step-1-test-and-update-your-schema). - {{site.data.alerts.end}} - -As of publishing, AWS DMS supports migrations from these relational databases (for a more accurate view of what is currently supported, see [Sources for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Introduction.Sources.html)): - -- Amazon Aurora -- Amazon DocumentDB (with MongoDB compatibility) -- Amazon S3 -- IBM Db2 (LUW edition only) -- MariaDB -- Microsoft Azure SQL -- Microsoft SQL Server -- MongoDB -- MySQL -- Oracle -- PostgreSQL -- SAP ASE - -## Step 1. Create a target endpoint pointing to CockroachDB - -1. In the AWS Console, open **AWS DMS**. -1. Open **Endpoints** in the sidebar. A list of endpoints will display, if any exist. -1. In the top-right portion of the window, select **Create endpoint**. - AWS-DMS-Create-Endpoint - - A configuration page will open. -1. In the **Endpoint type** section, select **Target endpoint**. -1. Supply an **Endpoint identifier** to identify the new target endpoint. -1. In the **Target engine** dropdown, select **PostgreSQL**. -1. Under **Access to endpoint database**, select **Provide access information manually**. -1. Enter the **Server name** and **Port** of your CockroachDB cluster. -1. Supply a **User name**, **Password**, and **Database name** from your CockroachDB cluster. - {{site.data.alerts.callout_info}} - To connect to a CockroachDB {{ site.data.products.serverless }} cluster, set the **Database name** to `{serverless-hostname}.{database-name}`. For details on how to find these parameters, see [Connect to a CockroachDB Serverless cluster](../cockroachcloud/connect-to-a-basic-cluster.html?filters=connection-parameters#connect-to-your-cluster). Also set **Secure Socket Layer (SSL) mode** to **require**. - {{site.data.alerts.end}} - AWS-DMS-Endpoint-Configuration -1. If needed, you can test the connection under **Test endpoint connection (optional)**. -1. To create the endpoint, select **Create endpoint**. - AWS-DMS-Test-Endpoint - -## Step 2. Create a database migration task - -A database migration task, also known as a replication task, controls what data are moved from the source database to the target database. - -### Step 2.1. Task configuration - -1. 
While in **AWS DMS**, select **Database migration tasks** in the sidebar. A list of database migration tasks will display, if any exist. -1. In the top-right portion of the window, select **Create task**. - AWS-DMS-Create-DB-Migration-Task - - A configuration page will open. -1. Supply a **Task identifier** to identify the replication task. -1. Select the **Replication instance** and **Source database endpoint** you created prior to starting this tutorial. -1. For the **Target database endpoint** dropdown, select the CockroachDB endpoint created in the previous section. -1. Select the appropriate **Migration type** based on your needs. - - {{site.data.alerts.callout_danger}} - If you choose **Migrate existing data and replicate ongoing changes** or **Replicate data changes only**, you must first [disable revision history for backups](#before-you-begin). - {{site.data.alerts.end}} - AWS-DMS-Task-Configuration - -### Step 2.2. Task settings - -1. For the **Editing mode** radio button, keep **Wizard** selected. -1. To preserve the schema you manually created, select **Truncate** or **Do nothing** for the **Target table preparation mode**. - AWS-DMS-Task-Settings -1. Check the **Enable CloudWatch logs** option. We highly recommend this for troubleshooting potential migration issues. -1. For the **Target Load**, select **Detailed debug**. - AWS-DMS-CloudWatch-Logs - -### Step 2.3. Table mappings - -{{site.data.alerts.callout_info}} -When specifying a range of tables to migrate, the following aspects of the source and target database schema **must** match unless you use [transformation rules](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Transformations.html): - -- Column names must be identical. -- Column types must be compatible. -- Column nullability must be identical. -{{site.data.alerts.end}} - -1. For the **Editing mode** radio button, keep **Wizard** selected. -1. Select **Add new selection rule**. -1. In the **Schema** dropdown, select **Enter a schema**. -1. Supply the appropriate **Source name** (schema name), **Table name**, and **Action**. - AWS-DMS-Table-Mappings - -{{site.data.alerts.callout_info}} -Use `%` as an example of a wildcard for all schemas in a PostgreSQL database. However, in MySQL, using `%` as a schema name imports all the databases, including the metadata/system ones, as MySQL treats schemas and databases as the same. -{{site.data.alerts.end}} - -## Step 3. Verify the migration - -Data should now be moving from source to target. You can analyze the **Table Statistics** page for information about replication. - -1. In **AWS DMS**, open **Database migration tasks** in the sidebar. -1. Select the task you created in Step 2. -1. Select **Table statistics** below the **Summary** section. - -If your migration succeeded, you should now [re-enable revision history](#before-you-begin) for cluster backups. - -If your migration failed for some reason, you can check the checkbox next to the table(s) you wish to re-migrate and select **Reload table data**. - -AWS-DMS-Reload-Table-Data - -## Optional configurations - -### AWS PrivateLink - -If using CockroachDB {{ site.data.products.dedicated }}, you can enable [AWS PrivateLink](https://aws.amazon.com/privatelink/) to securely connect your AWS application with your CockroachDB {{ site.data.products.dedicated }} cluster using a private endpoint. 
To configure AWS PrivateLink with CockroachDB {{ site.data.products.dedicated }}, see [Network Authorization](../cockroachcloud/network-authorization.html#aws-privatelink). - -### `BatchApplyEnabled` - -The `BatchApplyEnabled` setting can improve replication performance and is recommended for larger workloads. - -1. Open the existing database migration task. -1. Choose your task, and then choose **Modify**. -1. From the **Task settings** section, switch the **Editing mode** from **Wizard** to **JSON editor**. Locate the `BatchApplyEnabled` setting and change its value to `true`. Information about the `BatchApplyEnabled` setting can be found [here](https://aws.amazon.com/premiumsupport/knowledge-center/dms-batch-apply-cdc-replication/). - -AWS-DMS-BatchApplyEnabled - -{{site.data.alerts.callout_info}} -`BatchApplyEnabled` does not work when using **Drop tables on target** as a target table preparation mode. Thus, all schema-related changes must be manually copied over if using `BatchApplyEnabled`. -{{site.data.alerts.end}} - -## Known limitations - -- When using **Truncate** or **Do nothing** as a target table preparation mode, you cannot include tables with any hidden columns. You can verify which tables contain hidden columns by executing the following SQL query: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SELECT table_catalog, table_schema, table_name, column_name FROM information_schema.columns WHERE is_hidden = 'YES'; - ~~~ - -- If you select **Enable validation** in your [task settings](#step-2-2-task-settings) and have a [`TIMESTAMP`/`TIMESTAMPTZ`](timestamp.html) column in your database, the migration will fail with the following error: - - ~~~ - Suspending the table : 1 from validation since we received an error message : ERROR: unknown signature: to_char(timestamp, string); No query has been executed with that handle with type : non-retryable(0) (partition_validator.c:514) - ~~~ - - This is resolved in v22.2.1. On earlier versions, do not select the **Enable validation** option if your database has a `TIMESTAMP`/`TIMESTAMPTZ` column. - -- If you are migrating from PostgreSQL, are using a [`STRING`](string.html) as a [`PRIMARY KEY`](primary-key.html), and have selected **Enable validation** in your [task settings](#step-2-2-task-settings), validation can fail due to a difference in how CockroachDB handles case sensitivity in strings. - - To prevent this error, use `COLLATE "C"` on the relevant columns in PostgreSQL or a [collation](collate.html) such as `COLLATE "en_US"` in CockroachDB. - -- A migration to a [multi-region cluster](multiregion-overview.html) using AWS DMS will fail if the target database has [regional by row tables](multiregion-overview.html#regional-by-row-tables). This is because the `COPY` statement used by DMS is unable to process the `crdb_region` column in regional by row tables. - - To prevent this error, [set the regional by row table localities to `REGIONAL BY TABLE`](set-locality.html#set-the-table-locality-to-regional-by-row) and perform the migration. After the DMS operation is complete, [set the table localities to `REGIONAL BY ROW`](set-locality.html#set-the-table-locality-to-regional-by-row). - -## Troubleshooting common issues - -- For visibility into migration problems: - - - Check the `SQL_EXEC` [logging channel](logging-overview.html#logging-channels) for log messages related to `COPY` statements and the tables you are migrating. 
- - Check the [Amazon CloudWatch logs that you configured](#step-2-2-task-settings) for messages containing `SQL_ERROR`. - -- If you encounter errors like the following: - - ~~~ - 2022-10-21T13:24:07 [SOURCE_UNLOAD ]W: Value of column 'metadata' in table 'integrations.integration' was truncated to 32768 bytes, actual length: 116664 bytes (postgres_endpoint_unload.c:1072) - ~~~ - - Try selecting **Full LOB mode** in your [task settings](#step-2-2-task-settings). If this does not resolve the error, select **Limited LOB mode** and gradually increase the **Maximum LOB size** until the error goes away. For more information about LOB (large binary object) modes, see the [AWS documentation](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.LOBSupport.html). - -- If you encounter a `TransactionRetryWithProtoRefreshError` error in the [Amazon CloudWatch logs](#step-2-2-task-settings) or [CockroachDB logs](logging-overview.html) when migrating an especially large table with millions of rows, and are running v22.1.14 or later, set the following session variable using [`ALTER ROLE ... SET {session variable}`](alter-role.html#set-default-session-variable-values-for-a-role): - - {% include_cached copy-clipboard.html %} - ~~~ sql - ALTER ROLE {username} SET copy_from_retries_enabled = true; - ~~~ - - Then retry the migration. - -- Run the following query from within the target CockroachDB cluster to identify common problems with any tables that were migrated. If problems are found, explanatory messages will be returned in the `cockroach sql` shell. - - {% include_cached copy-clipboard.html %} - ~~~ sql - > WITH - invalid_columns - AS ( - SELECT - 'Table ' - || table_schema - || '.' - || table_name - || ' has column ' - || column_name - || ' which is hidden. Either drop the column or mark it as not hidden for DMS to work.' - AS fix_me - FROM - information_schema.columns - WHERE - is_hidden = 'YES' - AND table_name NOT LIKE 'awsdms_%' - ), - invalid_version - AS ( - SELECT - 'This cluster is on a version of CockroachDB which does not support AWS DMS. CockroachDB v21.2.13+ or v22.1+ is required.' - AS fix_me - WHERE - split_part( - substr( - substring( - version(), - e'v\\d+\\.\\d+.\\d+' - ), - 2 - ), - '.', - 1 - )::INT8 - < 22 - AND NOT - ( - split_part( - substr( - substring( - version(), - e'v\\d+\\.\\d+.\\d+' - ), - 2 - ), - '.', - 1 - )::INT8 - = 21 - AND split_part( - substr( - substring( - version(), - e'v\\d+\\.\\d+.\\d+' - ), - 2 - ), - '.', - 2 - )::INT8 - = 2 - AND split_part( - substr( - substring( - version(), - e'v\\d+\\.\\d+.\\d+' - ), - 2 - ), - '.', - 3 - )::INT8 - >= 13 - ) - ), - has_no_pk - AS ( - SELECT - 'Table ' - || a.table_schema - || '.' - || a.table_name - || ' has column ' - || a.column_name - || ' has no explicit PRIMARY KEY. Ensure you are not using target mode "Drop tables on target" and that this table has a PRIMARY KEY.' 
- AS fix_me - FROM - information_schema.key_column_usage AS a - JOIN information_schema.columns AS b ON - a.table_schema = b.table_schema - AND a.table_name = b.table_name - AND a.column_name = b.column_name - WHERE - b.is_hidden = 'YES' - AND a.column_name = 'rowid' - AND a.table_name NOT LIKE 'awsdms_%' - ) - SELECT fix_me FROM has_no_pk - UNION ALL SELECT fix_me FROM invalid_columns - UNION ALL SELECT fix_me FROM invalid_version; - ~~~ - -- Refer to Debugging Your AWS DMS Migrations ([Part 1](https://aws.amazon.com/blogs/database/debugging-your-aws-dms-migrations-what-to-do-when-things-go-wrong-part-1/), [Part 2](https://aws.amazon.com/blogs/database/debugging-your-aws-dms-migrations-what-to-do-when-things-go-wrong-part-2/), and [Part 3](https://aws.amazon.com/blogs/database/debugging-your-aws-dms-migrations-what-to-do-when-things-go-wrong-part-3/)) on the AWS Database Blog. - -- If the migration is still failing, [contact Support](https://support.cockroachlabs.com) and include the following information when filing an issue: - - Source database name. - - CockroachDB version. - - Source database schema. - - CockroachDB database schema. - - Any relevant logs (e.g., the last 100 lines preceding the AWS DMS failure). - - Ideally, a sample dataset formatted as a database dump file or CSV. - -## See Also - -- [Migration Overview](migration-overview.html) -- [Schema Conversion Tool](../cockroachcloud/migrations-page.html) -- [`cockroach demo`](cockroach-demo.html) -- [AWS DMS documentation](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html) -- [Client connection parameters](connection-parameters.html) -- [Third-Party Database Tools](third-party-database-tools.html) -- [Learn CockroachDB SQL](learn-cockroachdb-sql.html) diff --git a/src/current/v22.1/backup-and-restore-overview.md b/src/current/v22.1/backup-and-restore-overview.md deleted file mode 100644 index 579c6dce13b..00000000000 --- a/src/current/v22.1/backup-and-restore-overview.md +++ /dev/null @@ -1,72 +0,0 @@ ---- -title: Backup and Restore Overview -summary: An overview of features in backup and restore at CockroachDB. -toc: true -docs_area: manage ---- - -This page provides an overview of the backup and restore features available in CockroachDB: - -- [Types of backup available in CockroachDB](#cockroachdb-backup-types) -- [Backup and restore product support](#backup-and-restore-product-support) -- [SQL statements](#backup-and-restore-sql-statements) for working with backups and restores -- [Storage](#backup-storage) for backups - -You can create full or incremental backups of a [cluster](backup.html#backup-a-cluster), [database](backup.html#backup-a-database), or [table](backup.html#backup-a-table-or-view). Taking regular backups of your data is an operational best practice. - -For an explanation of how a backup works, see [Backup Architecture](backup-architecture.html). - -## CockroachDB backup types - -{% include cockroachcloud/backup-types.md %} - -## Backup and restore product support - -This table outlines the level of product support for backup and restore features in CockroachDB. See each of the pages linked in the table for usage examples: - -Backup / Restore | Description | Product Support -------------------+--------------+----------------- -[Full backup](take-full-and-incremental-backups.html) | An un-replicated copy of your cluster, database, or table's data. A full backup is the base for any further backups. |
• All products (Enterprise license not required)
-[Incremental backup](take-full-and-incremental-backups.html) | A copy of the changes in your data since the specified base backup (either a full backup or a full backup plus an incremental backup). | • CockroachDB {{ site.data.products.serverless }} — self-managed backups<br>• CockroachDB {{ site.data.products.dedicated }} — managed backups and self-managed backups<br>• CockroachDB {{ site.data.products.core }} with an [{{ site.data.products.enterprise }} license](enterprise-licensing.html)
-[Scheduled backup](manage-a-backup-schedule.html) | A schedule for periodic backups. | • CockroachDB {{ site.data.products.serverless }} — self-managed backups<br>• CockroachDB {{ site.data.products.dedicated }} — self-managed backups<br>• CockroachDB {{ site.data.products.core }} with an [{{ site.data.products.enterprise }} license](enterprise-licensing.html)
-[Backups with revision history](take-backups-with-revision-history-and-restore-from-a-point-in-time.html) | A backup with revision history allows you to back up every change made within the garbage collection period leading up to and including the given timestamp. | • CockroachDB {{ site.data.products.serverless }} — self-managed backups<br>• CockroachDB {{ site.data.products.dedicated }} — self-managed backups<br>• CockroachDB {{ site.data.products.core }} with an [{{ site.data.products.enterprise }} license](enterprise-licensing.html)
-[Point-in-time restore](take-backups-with-revision-history-and-restore-from-a-point-in-time.html) | A restore from an arbitrary point in time within the revision history of a backup. | • CockroachDB {{ site.data.products.serverless }} — self-managed backups<br>• CockroachDB {{ site.data.products.dedicated }} — self-managed backups<br>• CockroachDB {{ site.data.products.core }} with an [{{ site.data.products.enterprise }} license](enterprise-licensing.html)
-[Encrypted backup and restore](take-and-restore-encrypted-backups.html) | An encrypted backup using a KMS or passphrase. | • CockroachDB {{ site.data.products.serverless }} — self-managed backups<br>• CockroachDB {{ site.data.products.dedicated }} — self-managed backups<br>• CockroachDB {{ site.data.products.core }} with an [{{ site.data.products.enterprise }} license](enterprise-licensing.html)
-[Locality-aware backup and restore](take-and-restore-locality-aware-backups.html) | A backup where each node writes files only to the backup destination that matches the node locality configured at node startup. | • CockroachDB {{ site.data.products.serverless }} — self-managed backups<br>• CockroachDB {{ site.data.products.dedicated }} — self-managed backups<br>• CockroachDB {{ site.data.products.core }} with an [{{ site.data.products.enterprise }} license](enterprise-licensing.html)
                            • - -## Backup and restore SQL statements - -The following table outlines SQL statements you can use to create, configure, pause, and show backup and restore jobs: - - SQL Statement | Description -----------------|--------------------------------------------------------------------------------------------- -[`BACKUP`](backup.html) | Create full and incremental backups. -[`SHOW JOBS`](show-jobs.html) | Show a list of all running jobs or show the details of a specific job by its `job ID`. -[`PAUSE JOB`](pause-job.html) | Pause a backup or restore job with its `job ID`. -[`RESUME JOB`](resume-job.html) | Resume a backup or restore job with its `job ID`. -[`CANCEL JOB`](cancel-job.html) | Cancel a backup or restore job with its `job ID`. -[`SHOW BACKUP`](show-backup.html) | Show a backup's details at the [backup collection's](take-full-and-incremental-backups.html#backup-collections) storage location. -[`RESTORE`](restore.html) | Restore full and incremental backups. -[`ALTER BACKUP`](alter-backup.html) | **New in v22.1:** Add a new [KMS encryption key](take-and-restore-encrypted-backups.html#use-key-management-service) to an encrypted backup. -[`CREATE SCHEDULE FOR BACKUP`](create-schedule-for-backup.html) | Create a schedule for periodic backups. -[`SHOW SCHEDULES`](show-schedules.html) | View information on backup schedules. -[`PAUSE SCHEDULES`](pause-schedules.html) | Pause backup schedules. -[`RESUME SCHEDULES`](resume-schedules.html) | Resume paused backup schedules. -[`DROP SCHEDULES`](drop-schedules.html) | Drop backup schedules. - -## Backup storage - -We recommend taking backups to [cloud storage](use-cloud-storage-for-bulk-operations.html) and enabling object locking to protect the validity of your backups. CockroachDB supports Amazon S3, Azure Storage, and Google Cloud Storage for backups. Read the following usage information: - -- [Example file URLs](use-cloud-storage-for-bulk-operations.html#example-file-urls) to form the URL that you pass to `BACKUP` and `RESTORE` statements. -- [Authentication](use-cloud-storage-for-bulk-operations.html#authentication) to set up authentication to a cloud storage bucket and include those credentials in the URL. - -For detail on additional cloud storage features CockroachDB supports: - -- [Object locking](use-cloud-storage-for-bulk-operations.html#object-locking) to prevent backups from being overwritten or deleted. -- [Storage Class (AWS S3 only)](use-cloud-storage-for-bulk-operations.html#amazon-s3-storage-classes) to set a specific storage class for your backups. - -## See also - -- Considerations for using [backup](backup.html#considerations) and [restore](restore.html#considerations) -- [Backup collections](take-full-and-incremental-backups.html#backup-collections) for details on how CockroachDB stores backups diff --git a/src/current/v22.1/backup-architecture.md b/src/current/v22.1/backup-architecture.md deleted file mode 100644 index 2a4c501229c..00000000000 --- a/src/current/v22.1/backup-architecture.md +++ /dev/null @@ -1,100 +0,0 @@ ---- -title: Backup Architecture -summary: Learn about how CockroachDB processes backup jobs. -toc: true -docs_area: manage ---- - -CockroachDB backups operate as _jobs_, which are potentially long-running operations that could span multiple SQL sessions. Unlike regular SQL statements, which CockroachDB routes to the [optimizer](cost-based-optimizer.html) for processing, a [`BACKUP`](backup.html) statement will move into a job workflow. A backup job has four main phases: - -1. 
[Job creation](#job-creation-phase) -1. [Resolution](#resolution-phase) -1. [Export data](#export-phase) -1. [Metadata writing](#metadata-writing-phase) - -The [Overview](#overview) section that follows provides an outline of a backup job's process. For a more detailed explanation of how a backup job works, read from the [Job creation phase](#job-creation-phase) section. - -## Overview - -At a high level, CockroachDB performs the following tasks when running a backup job: - -1. Validates the parameters of the `BACKUP` statement then writes them to a job in the jobs system describing the backup. -1. Based on the parameters recorded in the job, and any previous backups found in the storage location, determines which key spans and time ranges of data in the [storage layer](architecture/storage-layer.html) need to be backed up. -1. Instructs various nodes in the cluster to each read those keys and writes the row data to the backup storage location. -1. Records any additional metadata about what was backed up to the backup storage location. - -The following diagram illustrates the flow from `BACKUP` statement through to a complete backup in cloud storage: - -A flow diagram representing the process of a backup job from statement through to backup data stored. - -## Job creation phase - -A backup begins by validating the general sense of the proposed backup. - -Let's take the following statement to start the backup process: - -~~~ sql -BACKUP DATABASE movr INTO 's3://bucket' WITH revision_history; -~~~ - -This statement is a request for a [full backup](take-full-and-incremental-backups.html#full-backups) of the `movr` database with the `revision_history` [option](backup.html#options). - -CockroachDB will verify the options passed in the `BACKUP` statement and check that the user has the [required privileges](backup.html#required-privileges) to take the backup. The tables are identified along with the set of key spans to back up. In this example, the backup will identify all the tables within the `movr` database and note that the `revision_history` option is required. - -The ultimate aim of the job creation phase is to complete all of these checks and write the detail of what the backup job should complete to a _job record_. - -If a [`detached`](backup.html#detached) backup was requested, the `BACKUP` statement is complete as it has created an uncommitted, but otherwise ready-to-run backup job. You'll find the job ID returned as output. Without the `detached` option, the job is committed and the statement waits to return the results until the backup job starts, runs (as described in the following sections), and terminates. - -Once the job record is committed, the cluster will try to run the backup job even if a client disconnects or the node handling the `BACKUP` statement terminates. From this point, the backup is a persisted job that any node in the cluster can take over executing to ensure it runs. The job record will move to the system jobs table, ready for a node to claim it. - -## Resolution phase - -Once one of the nodes has claimed the job from the system jobs table, it will take the job record’s information and outline a plan. This node becomes the _coordinator_. In our example, **Node 2** becomes the coordinator and starts to complete the following to prepare and resolve the targets for distributed backup work: - -- Test the connection to the storage bucket URL (`'s3://bucket'`). 
-- Determine the specific subdirectory for this backup, including if it should be incremental from any discovered existing directories. -- Calculate the keys of the backup data, as well as the time ranges if the backup is incremental. -- Determine the [leaseholder](architecture/overview.html#architecture-leaseholder) nodes for the keys to back up. -- Provide a plan to the nodes that will execute the data export (typically the leaseholder node). - -To map out the storage location's directory to which the nodes will write the data, the coordinator identifies the [type](backup-and-restore-overview.html#backup-and-restore-product-support) of backup. This determines the name of the new (or edited) directory to store the backup files in. For example, if there is an existing full backup in the target storage location, the upcoming backup will be [incremental](take-full-and-incremental-backups.html#incremental-backups) and therefore append to the full backup after any existing incremental layers discovered in it. - -For more information on how CockroachDB structures backups in storage, see [Backup collections](take-full-and-incremental-backups.html#backup-collections). - -### Key and time range resolution - -In this part of the resolution phase, the coordinator will calculate all the necessary spans of keys and their time ranges that the cluster needs to export for this backup. It divides the key spans based on which node is the [leaseholder](architecture/overview.html#architecture-leaseholder) of the range for that key span. Every node has a SQL processor on it to process the backup plan that the coordinator will pass to it. Typically, it is the backup SQL processor on the leaseholder node for the key span that will complete the export work. - -Each of the node's backup SQL processors are responsible for: - -1. Asking the [storage layer](architecture/storage-layer.html) for the content of each key span. -1. Receiving the content from the storage layer. -1. Writing it to the backup storage location or [locality-specific location](take-and-restore-locality-aware-backups.html) (whichever locality best matches the node). - -Since any node in a cluster can become the coordinator and all nodes could be responsible for exporting data during a backup, it is necessary that all nodes can connect to the backup storage location. - -## Export phase - -Once the coordinator has provided a plan to each of the backup SQL processors that specifies the backup data, the distributed export of the backup data begins. - -In the following diagram, **Node 2** and **Node 3** contain the leaseholders for the **R1** and **R2** [ranges](architecture/overview.html#architecture-range). Therefore, in this example backup job, the backup data will be exported from these nodes to the specified storage location. - -While processing, the nodes emit progress data that tracks their backup work to the coordinator. In the diagram, **Node 3** will send progress data to **Node 2**. The coordinator node will then aggregate the progress data into checkpoint files in the storage bucket. The checkpoint files provide a marker for the backup to resume after a retryable state, such as when it has been [paused](pause-job.html). - -Three-node cluster exporting backup data from the leaseholders - -## Metadata writing phase - -Once each of the nodes have fully completed their data export work, the coordinator will conclude the backup job by writing the backup metadata files. 
In the diagram, **Node 2** is exporting the backup data for **R1** as that range's leaseholder, but this node also exports the backup's metadata as the coordinator. - -The backup metadata files describe everything a backup contains. That is, all the information a [restore](restore.html) job will need to complete successfully. A backup without metadata files would indicate that the backup did not complete properly and would not be restorable. - -With the full backup complete, the specified storage location will contain the backup data and its metadata ready for a potential [restore](restore.html). After subsequent backups of the `movr` database to this storage location, CockroachDB will create a _backup collection_. See [Backup collections](take-full-and-incremental-backups.html#backup-collections) for information on how CockroachDB structures a collection of multiple backups. - -## See also - -- CockroachDB's general [Architecture Overview](architecture/overview.html) -- [Storage layer](architecture/storage-layer.html) -- [Take Full and Incremental Backups](take-full-and-incremental-backups.html) -- [`BACKUP`](backup.html) -- [`RESTORE`](restore.html) diff --git a/src/current/v22.1/backup.md b/src/current/v22.1/backup.md deleted file mode 100644 index fc3eb5df5dc..00000000000 --- a/src/current/v22.1/backup.md +++ /dev/null @@ -1,345 +0,0 @@ ---- -title: BACKUP -summary: Back up your CockroachDB cluster to a cloud storage services such as AWS S3, Google Cloud Storage, or other NFS. -toc: true -docs_area: reference.sql ---- - -CockroachDB's `BACKUP` [statement](sql-statements.html) allows you to create [full or incremental backups](take-full-and-incremental-backups.html) of your cluster's schema and data that are consistent as of a given timestamp. - -You can [backup a full cluster](#backup-a-cluster), which includes: - -- Relevant system tables -- All [databases](create-database.html) -- All [tables](create-table.html) (which automatically includes their [indexes](indexes.html)) -- All [views](views.html) -- All [scheduled jobs](manage-a-backup-schedule.html#view-and-control-a-backup-initiated-by-a-schedule) - -You can also backup: - -- [An individual database](#backup-a-database), which includes all of its tables and views. -- [An individual table](#backup-a-table-or-view), which includes its indexes and views. - - `BACKUP` only backs up entire tables; it **does not** support backing up subsets of a table. - -Because CockroachDB is designed with high fault tolerance, these backups are designed primarily for disaster recovery (i.e., if your cluster loses a majority of its nodes) through [`RESTORE`](restore.html). Isolated issues (such as small-scale node outages) do not require any intervention. - -To view the contents of an backup created with the `BACKUP` statement, use [`SHOW BACKUP`](show-backup.html). - -{% include {{ page.version.version }}/backups/backup-to-deprec.md %} - -## Considerations - -- Core users can only take [full backups](take-full-and-incremental-backups.html#full-backups). To use the other backup features, you need an [Enterprise license](enterprise-licensing.html). You can also use [CockroachDB {{ site.data.products.dedicated }}](https://cockroachlabs.cloud/signup?referralId=docs-crdb-backup), which runs [full backups daily and incremental backups hourly](../cockroachcloud/managed-backups.html). -- Backups will export [Enterprise license keys](enterprise-licensing.html) during a [full cluster backup](#backup-a-cluster). 
When you [restore](restore.html) a full cluster with an Enterprise license, it will restore the Enterprise license of the cluster you are restoring from. -- [Zone configurations](configure-zone.html) present on the destination cluster prior to a restore will be **overwritten** during a [cluster restore](restore.html#full-cluster) with the zone configurations from the [backed up cluster](#backup-a-cluster). If there were no customized zone configurations on the cluster when the backup was taken, then after the restore the destination cluster will use the zone configuration from the [`RANGE DEFAULT` configuration](configure-replication-zones.html#view-the-default-replication-zone). -- You cannot restore a backup of a multi-region database into a single-region database. -- Exclude a table's row data from a backup using the [`exclude_data_from_backup`](take-full-and-incremental-backups.html#exclude-a-tables-data-from-backups) parameter. -- `BACKUP` is a blocking statement. To run a backup job asynchronously, use the `DETACHED` option. See the [options](#options) below. - -### Storage considerations - -- [HTTP storage](use-a-local-file-server-for-bulk-operations.html) is not supported for `BACKUP` and `RESTORE`. -- Modifying backup files in the storage location could invalidate a backup, and therefore, prevent a restore. In v22.1 and later, **we recommend enabling [object locking](use-cloud-storage-for-bulk-operations.html#object-locking) in your cloud storage bucket.** -- While Cockroach Labs actively tests Amazon S3, Google Cloud Storage, and Azure Storage, we **do not** test [S3-compatible services](use-cloud-storage-for-bulk-operations.html) (e.g., [MinIO](https://min.io/), [Red Hat Ceph](https://docs.ceph.com/en/pacific/radosgw/s3/)). - -## Required privileges - -- [Full cluster backups](take-full-and-incremental-backups.html#full-backups) can only be run by members of the [`admin` role](security-reference/authorization.html#admin-role). By default, the `root` user belongs to the `admin` role. -- For all other backups, the user must have [read access](security-reference/authorization.html#managing-privileges) on all objects being backed up. Database backups require `CONNECT` privileges, and table backups require `SELECT` privileges. Backups of user-defined schemas, or backups containing user-defined types, require `USAGE` privileges. -- `BACKUP` requires full read and write permissions to its target destination. -- {% include_cached new-in.html version="v22.1" %}`BACKUP` does **not** require delete or overwrite permissions to its target destination. This allows `BACKUP` to write to cloud storage buckets that have [object locking](use-cloud-storage-for-bulk-operations.html#object-locking) configured. We recommend enabling object locking in cloud storage buckets to protect the validity of a backup. - -### Destination privileges - -{% include {{ page.version.version }}/backups/destination-file-privileges.md %} - -{% include {{ page.version.version }}/misc/s3-compatible-warning.md %} - -## Synopsis - -
                              -{% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/backup.html %} -
                              - -## Parameters - -CockroachDB stores full backups in a backup collection. Each full backup in a collection may also have incremental backups. For more detail on this, see [Backup collections](take-full-and-incremental-backups.html#backup-collections). - - Parameter | Description ------------+------------- -`targets` | Back up the listed [targets](#targets). -`subdirectory` | The name of the specific backup (e.g., `2021/03/23-213101.37`) in the collection to which you want to add an [incremental backup](take-full-and-incremental-backups.html#incremental-backups). To view available backup subdirectories, use [`SHOW BACKUPS IN collectionURI`](show-backup.html). If the backup `subdirectory` is not provided, incremental backups will be stored in the default `/incrementals` directory at the root of the collection URI. See the [Create incremental backups](#create-incremental-backups) example.

**Warning:** If you use an arbitrary `STRING` as the subdirectory, a new full backup will be created, but it will never be shown in `SHOW BACKUPS IN`. We do not recommend using arbitrary strings as subdirectory names.
-`LATEST` | Append an incremental backup to the latest completed full backup's subdirectory.
-`collectionURI` | The URI where you want to store the backup. (Or, the default locality for a locality-aware backup.)<br><br>
                              For information about this URL structure, see [Backup File URLs](#backup-file-urls). -`localityURI` | The URI containing the `COCKROACH_LOCALITY` parameter for a non-default locality that is part of a single locality-aware backup. -`timestamp` | Back up data as it existed as of [`timestamp`](as-of-system-time.html). The `timestamp` must be more recent than your data's garbage collection TTL (which is controlled by the [`gc.ttlseconds` replication zone variable](configure-replication-zones.html#gc-ttlseconds)). -`backup_options` | Control the backup behavior with a comma-separated list of [these options](#options). - -### Targets - -Target | Description ------------------------------------+------------------------------------------------------------------------- -N/A | Backup the cluster. For an example of a full cluster backup, [see Backup a cluster](#backup-a-cluster). -`DATABASE {database_name} [, ...]` | The name of the database(s) you want to backup (i.e., create backups of all tables and views in the database). For an example of backing up a database, see [Backup a database](#backup-a-database). -`TABLE {table_name} [, ...]` | The name of the table(s) or [view(s)](views.html) you want to backup. For an example of backing up a table or view, see [Backup a table or view](#backup-a-table-or-view). - -### Options - -{% include {{ page.version.version }}/backups/backup-options.md %} - -### Backup file URLs - -CockroachDB uses the URL provided to construct a secure API call to the service you specify. The URL structure depends on the type of file storage you are using. For more information, see the following: - -- [URL format](use-cloud-storage-for-bulk-operations.html#url-format) -- [Example file URLs](use-cloud-storage-for-bulk-operations.html#example-file-urls) -- [Authentication parameters](use-cloud-storage-for-bulk-operations.html#authentication) - -{{site.data.alerts.callout_success}} -Backups support cloud object locking and [Amazon S3 storage classes](#back-up-with-an-s3-storage-class). For more detail, see [Additional cloud storage feature support](use-cloud-storage-for-bulk-operations.html#additional-cloud-storage-feature-support). -{{site.data.alerts.end}} - -## Functional details - -### Object dependencies - -Dependent objects must be backed up at the same time as the objects they depend on. - -Object | Depends On --------|----------- -Table with [foreign key](foreign-key.html) constraints | The table it `REFERENCES`; however, this dependency can be [removed during the restore](restore.html#skip_missing_foreign_keys). -Table with a [sequence](create-sequence.html) | The sequence it uses; however, this dependency can be [removed during the restore](restore.html#skip_missing_sequences). -[Views](views.html) | The tables used in the view's `SELECT` statement. - -{{site.data.alerts.callout_info}} -To exclude a table's row data from a backup, use the `exclude_data_from_backup` parameter with [`CREATE TABLE`](create-table.html#create-a-table-with-data-excluded-from-backup) or [`ALTER TABLE`](set-storage-parameter.html#exclude-a-tables-data-from-backups). - -For more detail, see the [Exclude a table's data from backups](take-full-and-incremental-backups.html#exclude-a-tables-data-from-backups) example. -{{site.data.alerts.end}} - -### Users and privileges - -The `system.users` table stores your users and their passwords. 
To restore your users and privilege [grants](grant.html), do a cluster backup and restore the cluster to a fresh cluster with no user data. You can also backup the `system.users` table, and then use [this procedure](restore.html#restoring-users-from-system-users-backup). - -## Performance - -The `BACKUP` process minimizes its impact to the cluster's performance by distributing work to all nodes. Each node backs up only a specific subset of the data it stores (those for which it serves writes), with no two nodes backing up the same data. - -`BACKUP`, like any read, cannot export a range if the range contains an [unresolved intent](architecture/transaction-layer.html#resolving-write-intents). While you typically will want bulk, background jobs like `BACKUP` to have as little impact on your foreground traffic as possible, it's more important for backups to actually complete (which maintains your [recovery point objective (RPO)](https://en.wikipedia.org/wiki/Disaster_recovery#Recovery_Point_Objective)). Unlike a normal read transaction that will block until any uncommitted writes it encounters are resolved, `BACKUP` will block only for a configurable duration before invoking priority to ensure it can complete on-time. - -We recommend always starting backups with a specific [timestamp](timestamp.html) at least 10 seconds in the past. For example: - -~~~ sql -> BACKUP...AS OF SYSTEM TIME '-10s'; -~~~ - -This improves performance by decreasing the likelihood that the `BACKUP` will be [retried because it contends with other statements/transactions](transactions.html#transaction-retries). However, because `AS OF SYSTEM TIME` returns historical data, your reads might be stale. Taking backups with `AS OF SYSTEM TIME '-10s'` is a good best practice to reduce the number of still-running transactions you may encounter, since the backup will take priority and will force still-running transactions to restart after the backup is finished. - -`BACKUP` will initially ask individual ranges to backup but to skip if they encounter an intent. Any range that is skipped is placed at the end of the queue. When `BACKUP` has completed its initial pass and is revisiting ranges, it will ask any range that did not resolve within the given time limit (default 1 minute) to attempt to resolve any intents that it encounters and to **not** skip. Additionally, the backup's transaction priority is then set to `high`, which causes other transactions to abort until the intents are resolved and the backup is finished. - -{% include {{ page.version.version }}/backups/retry-failure.md %} - -### Backup performance configuration - -Cluster settings provide a means to tune a CockroachDB cluster. The following cluster settings are helpful for configuring backup files and performance: - -#### `bulkio.backup.file_size` - -Set a target for the amount of backup data written to each backup file. This is the maximum target size the backup will reach, but it is possible files of a smaller size are created during the backup job. - -Note that if you lower `bulkio.backup.file_size` below the default, it will cause the backup job to create many small SST files, which could impact a restore job’s performance because it will need to keep track of so many small files. - -**Default:** `128 MiB` - -#### `cloudstorage.azure.concurrent_upload_buffers` - -Improve the speed of backups to Azure Storage by increasing `cloudstorage.azure.concurrent_upload_buffers` to `3`. 
This setting configures the number of concurrent buffers that are used during file uploads to Azure Storage. Note that the higher this setting the more data that is held in memory, which can increase the risk of OOMs if there is not sufficient memory on each node. - -**Default:** `1` - -For a complete list, including all cluster settings related to backups, see the [Cluster Settings](cluster-settings.html) page. - -## Viewing and controlling backups jobs - -After CockroachDB successfully initiates a backup, it registers the backup as a job, and you can do the following: - - Action | SQL Statement ------------------------+----------------- -View the backup status | [`SHOW JOBS`](show-jobs.html) -Pause the backup | [`PAUSE JOB`](pause-job.html) -Resume the backup | [`RESUME JOB`](resume-job.html) -Cancel the backup | [`CANCEL JOB`](cancel-job.html) - -You can also visit the [**Jobs** page](ui-jobs-page.html) of the DB Console to view job details. The `BACKUP` statement will return when the backup is finished or if it encounters an error. - -{{site.data.alerts.callout_info}} -The presence of the `BACKUP MANIFEST` file in the backup subdirectory is an indicator that the backup job completed successfully. -{{site.data.alerts.end}} - -## Examples - -Per our guidance in the [Performance](#performance) section, we recommend starting backups from a time at least 10 seconds in the past using [`AS OF SYSTEM TIME`](as-of-system-time.html). - -{% include {{ page.version.version }}/backups/bulk-auth-options.md %} - -{{site.data.alerts.callout_info}} -The `BACKUP ... TO` syntax is **deprecated** as of v22.1 and will be removed in a future release. - -We recommend using the `BACKUP ... INTO {collectionURI}` syntax as per the following examples. -{{site.data.alerts.end}} - -### Backup a cluster - -To take a [full backup](take-full-and-incremental-backups.html#full-backups) of a cluster: - -{% include_cached copy-clipboard.html %} -~~~ sql -> BACKUP INTO \ -'s3://{BUCKET NAME}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}' \ -AS OF SYSTEM TIME '-10s'; -~~~ - -### Backup a database - -To take a [full backup](take-full-and-incremental-backups.html#full-backups) of a single database: - -{% include_cached copy-clipboard.html %} -~~~ sql -> BACKUP DATABASE bank \ -INTO 's3://{BUCKET NAME}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}' \ -AS OF SYSTEM TIME '-10s'; -~~~ - -To take a [full backup](take-full-and-incremental-backups.html#full-backups) of multiple databases: - -{% include_cached copy-clipboard.html %} -~~~ sql -> BACKUP DATABASE bank, employees \ -INTO 's3://{BUCKET NAME}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}' \ -AS OF SYSTEM TIME '-10s'; -~~~ - -### Backup a table or view - -To take a [full backup](take-full-and-incremental-backups.html#full-backups) of a single table or view: - -{% include_cached copy-clipboard.html %} -~~~ sql -> BACKUP bank.customers \ -INTO 's3://{BUCKET NAME}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}' \ -AS OF SYSTEM TIME '-10s'; -~~~ - -To take a [full backup](take-full-and-incremental-backups.html#full-backups) of multiple tables: - -{% include_cached copy-clipboard.html %} -~~~ sql -> BACKUP bank.customers, bank.accounts \ -INTO 's3://{BUCKET NAME}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}' \ -AS OF SYSTEM TIME '-10s'; -~~~ - -### Backup all tables in a schema - - To back up all tables in a [specified schema](create-schema.html), use a wildcard 
with the schema name: - -{% include_cached copy-clipboard.html %} -~~~ sql -> BACKUP test_schema.* -INTO 's3://{BUCKET NAME}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}' \ -AS OF SYSTEM TIME '-10s'; -~~~ - -Alternatively, use a [fully qualified name](sql-name-resolution.html#lookup-with-fully-qualified-names): `database.schema.*`. - -With this syntax, schemas will be resolved before databases. `test_object.*` will resolve to a _schema_ of `test_object` within the set current database before matching to a database of `test_object`. - -If a database and schema have the same name, such as `bank.bank`, running `BACKUP bank.*` will result in the schema resolving first. All the tables within that schema will be backed up. However, if this were to be run from a different database that does not have a `bank` schema, all tables in the `bank` database will be backed up. - -See [Name Resolution](sql-name-resolution.html) for more details on how naming hierarchy and name resolution work in CockroachDB. - -### Create incremental backups - -When a `BACKUP` statement specifies an existing subdirectory in the collection, explicitly or via the `LATEST` keyword, an incremental backup will be added to the default `/incrementals` directory at the root of the [collection](take-full-and-incremental-backups.html#backup-collections) storage location. - -To take an incremental backup using the `LATEST` keyword: - -{% include_cached copy-clipboard.html %} -~~~ sql -> BACKUP INTO LATEST IN \ - 's3://{BUCKET NAME}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}' \ - AS OF SYSTEM TIME '-10s'; -~~~ - -To store the backup in an existing subdirectory in the collection: - -{% include_cached copy-clipboard.html %} -~~~ sql -BACKUP INTO {'subdirectory'} IN 's3://{BUCKET NAME}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}' \ -AS OF SYSTEM TIME '-10s'; -~~~ - -{{site.data.alerts.callout_info}} -If you intend to take a **full** backup, we recommend running `BACKUP INTO {collectionURI}` without specifying a subdirectory. -{{site.data.alerts.end}} - -To explicitly control where you store your incremental backups, use the [`incremental_location`](backup.html#options) option. For more detail, see [this example](take-full-and-incremental-backups.html#incremental-backups-with-explicitly-specified-destinations) demonstrating the `incremental_location` option. - -### Run a backup asynchronously - -Use the `DETACHED` [option](#options) to execute the backup [job](show-jobs.html) asynchronously: - -{% include_cached copy-clipboard.html %} -~~~ sql -> BACKUP INTO \ -'s3://{BUCKET NAME}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}' \ -AS OF SYSTEM TIME '-10s' -WITH DETACHED; -~~~ - -The job ID is returned after the backup [job creation](backup-architecture.html#job-creation-phase) completes: - -~~~ - job_id ----------------------- - 592786066399264769 -(1 row) -~~~ - -**Without** the `DETACHED` option, `BACKUP` will block the SQL connection until the job completes. 
Once finished, the job status and more detailed job data is returned: - -~~~ -job_id | status | fraction_completed | rows | index_entries | bytes ----------------------+-----------+--------------------+------+---------------+-------- -652471804772712449 | succeeded | 1 | 50 | 0 | 4911 -(1 row) -~~~ - -### Back up with an S3 storage class - -{% include_cached new-in.html version="v22.1" %} To associate your backup objects with a [specific storage class](use-cloud-storage-for-bulk-operations.html#amazon-s3-storage-classes) in your Amazon S3 bucket, use the `S3_STORAGE_CLASS` parameter with the class. For example, the following S3 connection URI specifies the `INTELLIGENT_TIERING` storage class: - -{% include_cached copy-clipboard.html %} -~~~ sql -BACKUP DATABASE movr INTO 's3://{BUCKET NAME}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}&S3_STORAGE_CLASS=INTELLIGENT_TIERING' AS OF SYSTEM TIME '-10s'; -~~~ - -{% include {{ page.version.version }}/misc/storage-classes.md %} - -{% include {{ page.version.version }}/misc/storage-class-glacier-incremental.md %} - -### Advanced examples - -{% include {{ page.version.version }}/backups/advanced-examples-list.md %} - -## See also - -- [Take Full and Incremental Backups](take-full-and-incremental-backups.html) -- [Take and Restore Encrypted Backups](take-and-restore-encrypted-backups.html) -- [Take and Restore Locality-aware Backups](take-and-restore-locality-aware-backups.html) -- [Take Backups with Revision History and Restore from a Point-in-time](take-backups-with-revision-history-and-restore-from-a-point-in-time.html) -- [`SHOW BACKUP`](show-backup.html) -- [`CREATE SCHEDULE FOR BACKUP`](create-schedule-for-backup.html) -- [`RESTORE`](restore.html) -- [Configure Replication Zones](configure-replication-zones.html) diff --git a/src/current/v22.1/begin-transaction.md b/src/current/v22.1/begin-transaction.md deleted file mode 100644 index d071eea8165..00000000000 --- a/src/current/v22.1/begin-transaction.md +++ /dev/null @@ -1,172 +0,0 @@ ---- -title: BEGIN -summary: Initiate a SQL transaction with the BEGIN statement in CockroachDB. -toc: true -docs_area: reference.sql ---- - -The `BEGIN` [statement](sql-statements.html) initiates a [transaction](transactions.html), which either successfully executes all of the statements it contains or none at all. - -{{site.data.alerts.callout_danger}} -When using transactions, your application should include logic to [retry transactions](transactions.html#transaction-retries) that are aborted to break a dependency cycle between concurrent transactions. -{{site.data.alerts.end}} - - -## Synopsis - -
                              -{% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/begin.html %} -
                              - -## Required privileges - -No [privileges](security-reference/authorization.html#managing-privileges) are required to initiate a transaction. However, privileges are required for each statement within a transaction. - -## Aliases - -In CockroachDB, the following are aliases for the `BEGIN` statement: - -- `BEGIN TRANSACTION` -- `START TRANSACTION` - -## Parameters - - Parameter | Description ------------|------------- -`PRIORITY` | If you do not want the transaction to run with `NORMAL` priority, you can set it to `LOW` or `HIGH`.

Transactions with higher priority are less likely to need to be retried.<br><br>For more information, see [Transactions: Priorities](transactions.html#transaction-priorities).<br><br>**Default**: `NORMAL`
-`READ` | Set the transaction access mode to `READ ONLY` or `READ WRITE`. The current transaction access mode is also exposed as the [session variable](show-vars.html) `transaction_read_only`.<br><br>**Default**: `READ WRITE`
-`AS OF SYSTEM TIME` | Execute the transaction using the database contents "as of" a specified time in the past.<br><br>The `AS OF SYSTEM TIME` clause can be used only when the transaction is read-only. If the transaction contains any writes, or if the `READ WRITE` mode is specified, an error will be returned.<br><br>For more information, see [`AS OF SYSTEM TIME`](as-of-system-time.html).
-`NOT DEFERRABLE`<br>
                              `DEFERRABLE` | This clause is supported for compatibility with PostgreSQL. `NOT DEFERRABLE` is a no-op and the default behavior for CockroachDB. `DEFERRABLE` returns an `unimplemented` error. - - CockroachDB now only supports `SERIALIZABLE` isolation, so transactions can no longer be meaningfully set to any other `ISOLATION LEVEL`. In previous versions of CockroachDB, you could set transactions to `SNAPSHOT` isolation, but that feature has been removed. - -## Examples - -### Begin a transaction - -#### Use default settings - -Without modifying the `BEGIN` statement, the transaction uses `SERIALIZABLE` isolation and `NORMAL` priority. - -{% include_cached copy-clipboard.html %} -~~~ sql -> BEGIN; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SAVEPOINT cockroach_restart; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> UPDATE products SET inventory = 0 WHERE sku = '8675309'; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO orders (customer, sku, status) VALUES (1001, '8675309', 'new'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> RELEASE SAVEPOINT cockroach_restart; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> COMMIT; -~~~ - -{{site.data.alerts.callout_danger}}This example assumes you're using client-side intervention to handle transaction retries.{{site.data.alerts.end}} - -#### Change priority - -You can set a transaction's priority to `LOW` or `HIGH`. - -{% include_cached copy-clipboard.html %} -~~~ sql -> BEGIN PRIORITY HIGH; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SAVEPOINT cockroach_restart; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> UPDATE products SET inventory = 0 WHERE sku = '8675309'; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO orders (customer, sku, status) VALUES (1001, '8675309', 'new'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> RELEASE SAVEPOINT cockroach_restart; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> COMMIT; -~~~ - -You can also set a transaction's priority with [`SET TRANSACTION`](set-transaction.html). - -{{site.data.alerts.callout_danger}} -This example assumes you're using [client-side intervention to handle transaction retries](transactions.html#client-side-intervention). -{{site.data.alerts.end}} - -### Use the `AS OF SYSTEM TIME` option - -You can execute the transaction using the database contents "as of" a specified time in the past. - -{% include {{ page.version.version }}/sql/begin-transaction-as-of-system-time-example.md %} - -{{site.data.alerts.callout_success}} -You can also use the [`SET TRANSACTION`](set-transaction.html#use-the-as-of-system-time-option) statement inside the transaction to achieve the same results. This syntax is easier to use from [drivers and ORMs](install-client-drivers.html). -{{site.data.alerts.end}} - -### Begin a transaction with automatic retries - -CockroachDB will [automatically retry](transactions.html#transaction-retries) all transactions that contain both `BEGIN` and `COMMIT` in the same batch. Batching is controlled by your driver or client's behavior, but means that CockroachDB receives all of the statements as a single unit, instead of a number of requests. 
- -From the perspective of CockroachDB, a transaction sent as a batch looks like this: - -{% include_cached copy-clipboard.html %} -~~~ sql -> BEGIN; - -> DELETE FROM customers WHERE id = 1; - -> DELETE FROM orders WHERE customer = 1; - -> COMMIT; -~~~ - -However, in your application's code, batched transactions are often just multiple statements sent at once. For example, in Go, this transaction would be sent as a single batch (and automatically retried): - -~~~ go -db.Exec( -    `BEGIN; - -    DELETE FROM customers WHERE id = 1; - -    DELETE FROM orders WHERE customer = 1; - -    COMMIT;`, -) -~~~ - -Issuing statements this way signals to CockroachDB that you do not need to change any of the statement's values if the transaction doesn't immediately succeed, so it can continually retry the transaction until it's accepted. - -## See also - -- [Transactions](transactions.html) -- [`COMMIT`](commit-transaction.html) -- [`SAVEPOINT`](savepoint.html) -- [`RELEASE SAVEPOINT`](release-savepoint.html) -- [`ROLLBACK`](rollback-transaction.html) diff --git a/src/current/v22.1/bit.md b/src/current/v22.1/bit.md deleted file mode 100644 index d2dc0144136..00000000000 --- a/src/current/v22.1/bit.md +++ /dev/null @@ -1,125 +0,0 @@ ---- -title: BIT -summary: The BIT and BIT VARYING data types store bit arrays. -toc: true -docs_area: reference.sql ---- - -The `BIT` and `VARBIT` [data types](data-types.html) store bit arrays. -With `BIT`, the length is fixed; with `VARBIT`, the length can be variable. - -## Aliases - -The name `BIT VARYING` is an alias for `VARBIT`. - -## Syntax - -Bit array constants are expressed as literals. For example, `B'100101'` denotes an array of 6 bits. - -For more information about bit array constants, see the [constants documentation on bit array literals](sql-constants.html#bit-array-literals). - -For usage, see the [Example](#example) below. - -## Size - -The number of bits in a `BIT` value is determined as follows: - -| Type declaration | Logical size | -|------------------|-----------------------------------| -| BIT | 1 bit | -| BIT(N) | N bits | -| VARBIT | variable with no maximum | -| VARBIT(N) | variable with a maximum of N bits | - -The effective size of a `BIT` value is larger than its logical number -of bits by a bounded constant factor. Internally, CockroachDB stores -bit arrays in increments of 64 bits plus an extra integer value to -encode the length. - -The total size of a `BIT` value can be arbitrarily large, but it is -recommended to keep values under 1 MB to ensure performance. Above -that threshold, [write -amplification](https://en.wikipedia.org/wiki/Write_amplification) and -other considerations may cause significant performance degradation.
- -## Example - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE b (x BIT, y BIT(3), z VARBIT, w VARBIT(3)); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM b; -~~~ - -~~~ - column_name | data_type | is_nullable | column_default | generation_expression | indices | is_hidden -+-------------+-----------+-------------+----------------+-----------------------+-----------+-----------+ - x | BIT | true | NULL | | {primary} | false - y | BIT(3) | true | NULL | | {primary} | false - z | VARBIT | true | NULL | | {primary} | false - w | VARBIT(3) | true | NULL | | {primary} | false - rowid | INT | false | unique_rowid() | | {primary} | true -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO b(x, y, z, w) VALUES (B'1', B'101', B'1', B'1'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM b; -~~~ - -~~~ - x | y | z | w -+---+-----+---+---+ - 1 | 101 | 1 | 1 -~~~ - -For type `BIT`, the value must match exactly the specified size: - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO b(x) VALUES (B'101'); -~~~ - -~~~ -pq: bit string length 3 does not match type BIT -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO b(y) VALUES (B'10'); -~~~ - -~~~ -pq: bit string length 2 does not match type BIT(3) -~~~ - -For type `VARBIT`, the value must not be larger than the specified maximum size: - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO b(w) VALUES (B'1010'); -~~~ - -~~~ -pq: bit string length 4 too large for type VARBIT(3) -~~~ - -## Supported casting and conversion - -`BIT` values can be [cast](data-types.html#data-type-conversions-and-casts) to any of the following data types: - -Type | Details ------|--------- -`INT` | Converts the bit array to the corresponding numeric value, interpreting the bits as if the value was encoded using [two's complement](https://en.wikipedia.org/wiki/Two%27s_complement). If the bit array is larger than the integer type, excess bits on the left are ignored. For example, `B'1010'::INT` equals `10`. -`STRING` | Prints out the binary digits as a string. This recovers the literal representation. For example, `B'1010'::STRING` equals `'1010'`. - -## See also - -[Data Types](data-types.html) diff --git a/src/current/v22.1/bool.md b/src/current/v22.1/bool.md deleted file mode 100644 index 50bea18cf4f..00000000000 --- a/src/current/v22.1/bool.md +++ /dev/null @@ -1,80 +0,0 @@ ---- -title: BOOL -summary: The BOOL data type stores Boolean values of false or true. -toc: true -docs_area: reference.sql ---- - -The `BOOL` [data type](data-types.html) stores a Boolean value of `false` or `true`. - - -## Aliases - -In CockroachDB, `BOOLEAN` is an alias for `BOOL`. - -## Syntax - -There are two predefined [named constants](sql-constants.html#named-constants) for `BOOL`: `TRUE` and `FALSE` (the names are case-insensitive). - -Alternately, a boolean value can be obtained by coercing a numeric value: zero is coerced to `FALSE`, and any non-zero value to `TRUE`. - -- `CAST(0 AS BOOL)` (false) -- `CAST(123 AS BOOL)` (true) - -## Size - -A `BOOL` value is 1 byte in width, but the total storage size is likely to be larger due to CockroachDB metadata. 
- -## Examples - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE bool (a INT PRIMARY KEY, b BOOL, c BOOLEAN); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM bool; -~~~ - -~~~ - column_name | data_type | is_nullable | column_default | generation_expression | indices | is_hidden ---------------+-----------+-------------+----------------+-----------------------+-----------+------------ - a | INT8 | false | NULL | | {primary} | false - b | BOOL | true | NULL | | {primary} | false - c | BOOL | true | NULL | | {primary} | false -(3 rows) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO bool VALUES (12345, true, CAST(0 AS BOOL)); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM bool; -~~~ - -~~~ -+-------+------+-------+ -| a | b | c | -+-------+------+-------+ -| 12345 | true | false | -+-------+------+-------+ -~~~ - -## Supported casting and conversion - -`BOOL` values can be [cast](data-types.html#data-type-conversions-and-casts) to any of the following data types: - -Type | Details ------|-------- -`INT` | Converts `true` to `1`, `false` to `0` -`DECIMAL` | Converts `true` to `1`, `false` to `0` -`FLOAT` | Converts `true` to `1`, `false` to `0` -`STRING` | –– - -## See also - -[Data Types](data-types.html) diff --git a/src/current/v22.1/build-a-csharp-app-with-cockroachdb.md b/src/current/v22.1/build-a-csharp-app-with-cockroachdb.md deleted file mode 100644 index baf864effab..00000000000 --- a/src/current/v22.1/build-a-csharp-app-with-cockroachdb.md +++ /dev/null @@ -1,209 +0,0 @@ ---- -title: Build a C# App with CockroachDB and the .NET Npgsql Driver -summary: Learn how to use CockroachDB from a simple C# (.NET) application with a low-level client driver. -toc: true -twitter: true -referral_id: docs_csharp -docs_area: get_started ---- - -This tutorial shows you how build a simple C# application with CockroachDB and the .NET Npgsql driver. - -We have tested the [.NET Npgsql driver](http://www.npgsql.org/) enough to claim **partial** support. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support. - -## Step 1. Start CockroachDB - -{% include {{page.version.version}}/app/start-cockroachdb.md %} - -## Step 2. Create a .NET project - -In your terminal, run the following commands: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ dotnet new console -o cockroachdb-test-app -~~~ - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cd cockroachdb-test-app -~~~ - -The `dotnet` command creates a new app of type `console`. The `-o` parameter creates a directory named `cockroachdb-test-app` where your app will be stored and populates it with the required files. The `cd cockroachdb-test-app` command puts you into the newly created app directory. - -## Step 3. Install the Npgsql driver - -Install the latest version of the [Npgsql driver](https://www.nuget.org/packages/Npgsql/) into the .NET project using the built-in nuget package manager: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ dotnet add package Npgsql -~~~ - -## Step 4. Create a database - -{% include {{page.version.version}}/app/create-a-database.md %} - -## Step 5. 
Run the C# code - -Now that you have set up your project and created a database, in this section you will: - -- [Create a table and insert some rows](#basic-example) -- [Execute a batch of statements as a transaction](#transaction-example-with-retry-logic) - -### Basic example - -#### Get the code - -Replace the contents of the `Program.cs` file that was automatically generated in your `cockroachdb-test-app` directory with the code below: - -{{site.data.alerts.callout_info}} -The following examples use the SSL mode `require` because the .NET Npgsql driver validates certificates differently from other PostgreSQL drivers. For other drivers, we recommend using `verify-full` as a security best practice. -{{site.data.alerts.end}} - -
                              - -{% include_cached copy-clipboard.html %} -~~~ c# -{% remote_include https://raw.githubusercontent.com/cockroachlabs/hello-world-csharp/main/basic.cs %} -~~~ - -
                              - -
                              - -{% include_cached copy-clipboard.html %} -~~~ c# -{% remote_include https://raw.githubusercontent.com/cockroachlabs/hello-world-csharp/cockroachcloud/basic.cs %} -~~~ - -
                              - -#### Update the connection parameters - -
                              - -In a text editor, modify `Program.cs` with the settings to connect to the demo cluster: - -{% include_cached copy-clipboard.html %} -~~~ csharp -connStringBuilder.Host = "{localhost}"; -connStringBuilder.Port = 26257; -connStringBuilder.SslMode = SslMode.Require; -connStringBuilder.Username = "{username}"; -connStringBuilder.Password = "{password}"; -connStringBuilder.Database = "bank"; -connStringBuilder.TrustServerCertificate = true; -~~~ - -Where `{username}` and `{password}` are the database username and password you created earlier. - -
                              - -
                              - -1. In the CockroachDB {{ site.data.products.cloud }} Console, select the **Connection Parameters** tab of the **Connection Info** dialog. - -1. In a text editor, modify the connection parameters in `Program.cs` with the settings to connect to your cluster: - -{% include_cached copy-clipboard.html %} -~~~ csharp -connStringBuilder.Host = "{host-name}"; -connStringBuilder.Port = 26257; -connStringBuilder.SslMode = SslMode.Require; -connStringBuilder.Username = "{username}"; -connStringBuilder.Password = "{password}"; -connStringBuilder.Database = "bank"; -connStringBuilder.RootCertificate = "~/.postgresql/root.crt"; -connStringBuilder.TrustServerCertificate = true; -~~~ - -Where: - -- `{username}` and `{password}` specify the SQL username and password that you created earlier. -- `{host-name}` is the name of the CockroachDB {{ site.data.products.cloud }} host (e.g., `blue-dog-4300.6wr.cockroachlabs.cloud`). - -
                              - -#### Run the code - -Compile and run the code: - -{% include_cached copy-clipboard.html %} -~~~ shell -dotnet run -~~~ - -The output should be: - -~~~ -Initial balances: - account 1: 1000 - account 2: 250 -~~~ - -### Transaction example (with retry logic) - -#### Get the code - -Open `cockroachdb-test-app/Program.cs` again and replace the contents with the code shown below. Make sure to keep the connection parameters the same as in the [previous example](#update-the-connection-parameters). - -
                              - -{% include_cached copy-clipboard.html %} -~~~ c# -{% remote_include https://raw.githubusercontent.com/cockroachlabs/hello-world-csharp/main/transaction.cs %} -~~~ - -
                              - -
                              - -{% include_cached copy-clipboard.html %} -~~~ c# -{% remote_include https://raw.githubusercontent.com/cockroachlabs/hello-world-csharp/cockroachcloud/transaction.cs %} -~~~ - -
                              - -#### Run the code - -This time, running the code will execute a batch of statements as an atomic transaction to transfer funds from one account to another, where all included statements are either committed or aborted: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ dotnet run -~~~ - -The output should be: - -~~~ -Initial balances: - account 1: 1000 - account 2: 250 -Final balances: - account 1: 900 - account 2: 350 -~~~ - -However, if you want to verify that funds were transferred from one account to another, use the [built-in SQL client](cockroach-sql.html): - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure --database=bank -e 'SELECT id, balance FROM accounts' -~~~ - -~~~ - id | balance -+----+---------+ - 1 | 900 - 2 | 350 -(2 rows) -~~~ - - -## What's next? - -Read more about using the [.NET Npgsql driver](http://www.npgsql.org/). - -{% include {{ page.version.version }}/app/see-also-links.md %} diff --git a/src/current/v22.1/build-a-go-app-with-cockroachdb-gorm.md b/src/current/v22.1/build-a-go-app-with-cockroachdb-gorm.md deleted file mode 100644 index deff7d1d87a..00000000000 --- a/src/current/v22.1/build-a-go-app-with-cockroachdb-gorm.md +++ /dev/null @@ -1,134 +0,0 @@ ---- -title: Build a Go App with CockroachDB and GORM -summary: Learn how to use CockroachDB from a simple Go application with the GORM ORM. -toc: true -twitter: false -referral_id: docs_go_gorm -docs_area: get_started ---- - -{% include {{ page.version.version }}/filter-tabs/crud-go.md %} - -This tutorial shows you how build a simple CRUD Go application with CockroachDB and the [GORM ORM](https://gorm.io/index.html). - -{{site.data.alerts.callout_success}} -For another use of GORM with CockroachDB, see our [`examples-orms`](https://github.com/cockroachdb/examples-orms) repository. -{{site.data.alerts.end}} - -## Step 1. Start CockroachDB - -{% include {{ page.version.version }}/setup/sample-setup.md %} - -## Step 2. Get the code - -Clone the code's GitHub repo: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ git clone https://github.com/cockroachlabs/example-app-go-gorm -~~~ - -The project has the following directory structure: - -~~~ -├── README.md -└── main.go -~~~ - -The `main.go` file defines an `Account` struct that maps to a new `accounts` table. The file also contains some read and write database operations that are executed in the `main` method of the program. - -{% include_cached copy-clipboard.html %} -~~~ go -{% remote_include https://raw.githubusercontent.com/cockroachlabs/example-app-go-gorm/master/main.go %} -~~~ - -{{site.data.alerts.callout_info}} -CockroachDB may require the [client to retry a transaction](transactions.html#transaction-retries) in the case of read/write [contention](performance-best-practices-overview.html#transaction-contention). The [CockroachDB Go client](https://github.com/cockroachdb/cockroach-go) includes a generic **retry function** (`ExecuteTx()`) that runs inside a transaction and retries it as needed. The code sample shows how you can use this function to wrap SQL statements. -{{site.data.alerts.end}} - -## Step 3. Initialize the database - -1. Navigate to the `example-app-go-gorm` directory: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cd example-app-go-gorm - ~~~ - -1. Set the `DATABASE_URL` environment variable to the connection string for your cluster: - -
                              - - {% include_cached copy-clipboard.html %} - ~~~ shell - export DATABASE_URL="postgresql://root@localhost:26257?sslmode=disable" - ~~~ - -
                              - -
                              - - {% include_cached copy-clipboard.html %} - ~~~ shell - export DATABASE_URL="{connection-string}" - ~~~ - - Where `{connection-string}` is the connection string you copied earlier. - -## Step 4. Run the code - -1. Initialize the module: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cd example-app-go-gorm - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ go mod init basic-sample && go mod tidy - ~~~ - -1. Run the code: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ go run main.go - ~~~ - - The output should look similar to the following: - - ~~~ - 2021/09/16 14:17:12 Creating 5 new accounts... - 2021/09/16 14:17:12 Accounts created. - Balance at '2021-09-16 14:17:12.68843 -0400 EDT m=+2.760587790': - 1580d2f4-c9ec-4f26-bbe7-6a53e9aa5170 1947 - 26ddc77b-8068-409b-b305-0c5d873f7c43 7987 - 3d97ea5a-5108-4388-88e8-92524d5de5e8 4159 - af49831d-d637-4a20-a9a7-01e9fe4628fe 8181 - f0cc97ef-e3fe-4abb-a44a-0dd04207f7d4 2181 - 2021/09/16 14:17:12 Transferring 100 from account af49831d-d637-4a20-a9a7-01e9fe4628fe to account 3d97ea5a-5108-4388-88e8-92524d5de5e8... - 2021/09/16 14:17:12 Funds transferred. - Balance at '2021-09-16 14:17:12.759686 -0400 EDT m=+2.831841311': - 1580d2f4-c9ec-4f26-bbe7-6a53e9aa5170 1947 - 26ddc77b-8068-409b-b305-0c5d873f7c43 7987 - 3d97ea5a-5108-4388-88e8-92524d5de5e8 4259 - af49831d-d637-4a20-a9a7-01e9fe4628fe 8081 - f0cc97ef-e3fe-4abb-a44a-0dd04207f7d4 2181 - 2021/09/16 14:17:12 Deleting accounts created... - 2021/09/16 14:17:12 Accounts deleted. - ~~~ - - The code runs a migration that creates the `accounts` table in the `bank` database, based on the `Account` struct defined at the top of the `main.go` file. - - As shown in the output, the code also does the following: - - Inserts some rows into the `accounts` table. - - Reads values from the table. - - Updates values in the table. - - Deletes values from the table. - -## What's next? - -Read more about using the [GORM ORM](http://gorm.io), or check out a more realistic implementation of GORM with CockroachDB in our [`examples-orms`](https://github.com/cockroachdb/examples-orms) repository. - -{% include {{ page.version.version }}/app/see-also-links.md %} diff --git a/src/current/v22.1/build-a-go-app-with-cockroachdb-pq.md b/src/current/v22.1/build-a-go-app-with-cockroachdb-pq.md deleted file mode 100644 index deaa52ed88b..00000000000 --- a/src/current/v22.1/build-a-go-app-with-cockroachdb-pq.md +++ /dev/null @@ -1,97 +0,0 @@ ---- -title: Build a Go App with CockroachDB the Go pq Driver -summary: Learn how to use CockroachDB from a simple Go application with the Go pq driver. -toc: true -twitter: false -referral_id: docs_go_pq -docs_area: get_started ---- - -{% include {{ page.version.version }}/filter-tabs/crud-go.md %} - -This tutorial shows you how build a simple Go application with CockroachDB and the Go [pq driver](https://github.com/lib/pq). - -## Step 1. Start CockroachDB - -{% include {{ page.version.version }}/setup/sample-setup.md %} - -## Step 2. Get the code - -Clone the code's GitHub repo: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ git clone https://github.com/cockroachlabs/hello-world-go-pq -~~~ - -## Step 3. Initialize the database - -1. Navigate to the `hello-world-go-pq` directory: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cd hello-world-go-pq - ~~~ - -1. Set the `DATABASE_URL` environment variable to the connection string for your cluster: - -
                              - - {% include_cached copy-clipboard.html %} - ~~~ shell - export DATABASE_URL="postgresql://root@localhost:26257?sslmode=disable" - ~~~ - -
                              - -
                              - - {% include_cached copy-clipboard.html %} - ~~~ shell - export DATABASE_URL="{connection-string}" - ~~~ - - Where `{connection-string}` is the connection string you copied earlier. - -## Step 4. Run the code - -You can now run the code sample (`main.go`) provided in this tutorial to do the following: - -- Create a table. -- Insert some rows into the table you created. -- Read values from the table. -- Execute a batch of statements as an atomic [transaction](transactions.html). - - Note that CockroachDB may require the [client to retry a transaction](transactions.html#transaction-retries) in the case of read/write [contention](performance-best-practices-overview.html#transaction-contention). The [CockroachDB Go client](https://github.com/cockroachdb/cockroach-go) includes a generic **retry function** (`ExecuteTx()`) that runs inside a transaction and retries it as needed. The code sample shows how you can use this function to wrap SQL statements. - -1. Initialize the module: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ go mod init basic-sample && go mod tidy - ~~~ - -1. Run the code: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ go run main.go - ~~~ - - The output should be: - - ~~~ - Balances: - 1 1000 - 2 250 - Success - Balances: - 1 900 - 2 350 - ~~~ - -## What's next? - -Read more about using the [Go pq driver](https://godoc.org/github.com/lib/pq). - -{% include {{ page.version.version }}/app/see-also-links.md %} diff --git a/src/current/v22.1/build-a-go-app-with-cockroachdb-upperdb.md b/src/current/v22.1/build-a-go-app-with-cockroachdb-upperdb.md deleted file mode 100644 index da67a01d9b5..00000000000 --- a/src/current/v22.1/build-a-go-app-with-cockroachdb-upperdb.md +++ /dev/null @@ -1,159 +0,0 @@ ---- -title: Build a Go App with CockroachDB and upper/db -summary: Learn how to use CockroachDB from a simple Go application with the upper/db data access layer. -toc: true -twitter: false -docs_area: get_started ---- - -{% include {{ page.version.version }}/filter-tabs/crud-go.md %} - -This tutorial shows you how build a simple Go application with CockroachDB and the [upper/db](https://upper.io/) data access layer. - -## Before you begin - -{% include {{page.version.version}}/app/before-you-begin.md %} - -
                              - -## Step 1. Create the `maxroach` user and `bank` database - -{% include {{page.version.version}}/app/create-maxroach-user-and-bank-database.md %} - -## Step 2. Generate a certificate for the `maxroach` user - -Create a certificate and key for the `maxroach` user by running the following command: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach cert create-client maxroach --certs-dir=certs --ca-key=my-safe-directory/ca.key -~~~ - -The code samples will run with `maxroach` as the user. - -## Step 3. Run the Go code - -The sample code shown below uses upper/db to map Go-specific objects to SQL operations. Specifically, the code: - -- Creates the `accounts` table, if it does not already exist. -- Deletes any existing rows in the `accounts` table. -- Inserts two rows into the `accounts` table. -- Prints the rows in the `accounts` table to the terminal. -- Deletes the first row in the `accounts` table. -- Updates the rows in the `accounts` table within an explicit [transaction](transactions.html). -- Prints the rows in the `accounts` table to the terminal once more. - -{% include_cached copy-clipboard.html %} -~~~ go -{% include {{ page.version.version }}/app/upperdb-basic-sample/main.go %} -~~~ - -Note that the sample code also includes a function that simulates a transaction error (`crdbForceRetry()`). Upper/db's CockroachDB adapter [automatically retries transactions](transactions.html#client-side-intervention) when transaction errors are thrown. As a result, this function forces a transaction retry. - -To run the code, copy the sample above, or download it directly. - -{{site.data.alerts.callout_success}} -To clone a version of the code below that connects to insecure clusters, run the following command: - -`git clone https://github.com/cockroachlabs/hello-world-go-upperdb/` - -Note that you will need to edit the connection string to use the certificates that you generated when you set up your secure cluster. -{{site.data.alerts.end}} - -
                              - -
                              - -## Step 1. Create the `maxroach` user and `bank` database - -{% include {{page.version.version}}/app/insecure/create-maxroach-user-and-bank-database.md %} - -## Step 2. Run the Go code - -The sample code shown below uses upper/db to map Go-specific objects to SQL operations. Specifically, the code: - -- Creates the `accounts` table, if it does not already exist. -- Deletes any existing rows in the `accounts` table. -- Inserts two rows into the `accounts` table. -- Prints the rows in the `accounts` table to the terminal. -- Deletes the first row in the `accounts` table. -- Updates the rows in the `accounts` table within an explicit [transaction](transactions.html). -- Prints the rows in the `accounts` table to the terminal once more. - -{% include_cached copy-clipboard.html %} -~~~ go -{% include {{ page.version.version }}/app/insecure/upperdb-basic-sample/main.go %} -~~~ - -Note that the sample code also includes a function that simulates a transaction error (`crdbForceRetry()`). Upper/db's CockroachDB adapter [automatically retries transactions](transactions.html#client-side-intervention) when transaction errors are thrown. As a result, this function forces a transaction retry. - -Copy the code or download it directly. - -{{site.data.alerts.callout_success}} -To clone a version of the code below that connects to insecure clusters, run the following command: - -`git clone https://github.com/cockroachlabs/hello-world-go-upperdb/` -{{site.data.alerts.end}} - -
                              - -Change to the directory where you cloned the repo and get the dependencies with `go mod init`: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ go mod init hello-world-go-upperdb -~~~ - -Then run the code: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ go run main.go -~~~ - -The output should look similar to the following: - -~~~ -go: finding module for package github.com/upper/db/v4 -go: finding module for package github.com/upper/db/v4/adapter/cockroachdb -go: found github.com/upper/db/v4 in github.com/upper/db/v4 v4.0.0 -2020/09/16 10:31:55 Balances: - accounts[590467288222990337]: 1000 - accounts[590467288229576705]: 250 -2020/09/16 10:31:55 Balances: - accounts[590467288222990337]: 500 - accounts[590467288229576705]: 999 -2020/09/16 10:31:55 upper/db: log_level=WARNING file=go/pkg/mod/github.com/upper/db/v4@v4.0.0/internal/sqladapter/session.go:642 - Session ID: 00006 - Transaction ID: 00005 - Query: SELECT crdb_internal.force_retry('1ms'::INTERVAL) - Error: pq: restart transaction: crdb_internal.force_retry(): TransactionRetryWithProtoRefreshError: forced by crdb_internal.force_retry() - Time taken: 0.00171s - Context: context.Background - -2020/09/16 10:31:55 upper/db: log_level=WARNING file=go/pkg/mod/github.com/upper/db/v4@v4.0.0/internal/sqladapter/session.go:642 - Session ID: 00006 - Transaction ID: 00005 - Query: INSERT INTO "accounts" ("balance") VALUES ($1) RETURNING "id" - Arguments: []interface {}{887} - Error: pq: current transaction is aborted, commands ignored until end of transaction block - Time taken: 0.00065s - Context: context.Background - -2020/09/16 10:31:56 Balances: - accounts[590467288229576705]: 999 - accounts[590467288342757377]: 887 - accounts[590467288350064641]: 342 -~~~ - -Note that the forced transaction errors result in errors printed to the terminal, but the transactions are retried until they succeed. - -## What's next? - -Read more about upper/db: - -- [Introduction to upper/db](https://upper.io/v4/getting-started/) -- [The upper/db tour](https://tour.upper.io/) -- [upper/db reference docs](https://pkg.go.dev/github.com/upper/db/v4) - -{% include {{ page.version.version }}/app/see-also-links.md %} diff --git a/src/current/v22.1/build-a-go-app-with-cockroachdb.md b/src/current/v22.1/build-a-go-app-with-cockroachdb.md deleted file mode 100644 index eaad2fd0518..00000000000 --- a/src/current/v22.1/build-a-go-app-with-cockroachdb.md +++ /dev/null @@ -1,114 +0,0 @@ ---- -title: Build a Simple CRUD Go App with CockroachDB and the Go pgx Driver -summary: Learn how to use CockroachDB from a simple Go application with the Go pgx driver. -toc: true -twitter: false -referral_id: docs_go_pgx -docs_area: get_started ---- - -{% include {{ page.version.version }}/filter-tabs/crud-go.md %} - -This tutorial shows you how build a simple CRUD Go application with CockroachDB and the [Go pgx driver](https://pkg.go.dev/github.com/jackc/pgx). - -## Step 1. Start CockroachDB - -{% include {{ page.version.version }}/setup/sample-setup.md %} - -## Step 2. 
Get the code - -Clone the code's GitHub repo: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ git clone https://github.com/cockroachlabs/example-app-go-pgx/ -~~~ - -The project has the following directory structure: - -~~~ -├── README.md -├── dbinit.sql -└── main.go -~~~ - -The `dbinit.sql` file initializes the database schema that the application uses: - -{% include_cached copy-clipboard.html %} -~~~ go -{% remote_include https://raw.githubusercontent.com/cockroachlabs/example-app-go-pgx/master/dbinit.sql %} -~~~ - -The `main.go` file contains the code for `INSERT`, `SELECT`, `UPDATE`, and `DELETE` SQL operations. The file also executes the `main` method of the program. - -{% include_cached copy-clipboard.html %} -~~~ go -{% remote_include https://raw.githubusercontent.com/cockroachlabs/example-app-go-pgx/master/main.go %} -~~~ - -{{site.data.alerts.callout_info}} -CockroachDB may require the [client to retry a transaction](transactions.html#transaction-retries) in the case of read/write [contention](performance-best-practices-overview.html#transaction-contention). The [CockroachDB Go client](https://github.com/cockroachdb/cockroach-go) includes a generic **retry function** (`ExecuteTx()`) that runs inside a transaction and retries it as needed. The code sample shows how you can use this function to wrap SQL statements. -{{site.data.alerts.end}} - -## Step 3. Initialize the database - -1. Navigate to the `example-app-go-pgx` directory: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cd example-app-go-pgx - ~~~ - -{% include {{ page.version.version }}/setup/init-bank-sample.md %} - -## Step 4. Run the code - -1. Initialize the module: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ go mod init basic-sample && go mod tidy - ~~~ - -1. Run the code: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ go run main.go - ~~~ - - The output should look similar to the following: - - ~~~ - 2021/07/20 14:48:02 Creating new rows... - 2021/07/20 14:48:02 New rows created. - 2021/07/20 14:48:02 Initial balances: - 2021/07/20 14:48:02 3a936990-a0c9-45bf-bc24-92e10d91dca9: 300 - 2021/07/20 14:48:02 8d1849dd-9222-4b12-a4ff-94e583b544a8: 250 - 2021/07/20 14:48:02 c6ae8917-d24e-4115-b719-f663dbfb9ffb: 500 - 2021/07/20 14:48:02 d0ce1f5c-e468-4899-8590-2bb6076247f2: 100 - 2021/07/20 14:48:02 Transferring funds from account with ID c6ae8917-d24e-4115-b719-f663dbfb9ffb to account with ID d0ce1f5c-e468-4899-8590-2bb6076247f2... - 2021/07/20 14:48:02 Transfer successful. - 2021/07/20 14:48:02 Balances after transfer: - 2021/07/20 14:48:02 3a936990-a0c9-45bf-bc24-92e10d91dca9: 300 - 2021/07/20 14:48:02 8d1849dd-9222-4b12-a4ff-94e583b544a8: 250 - 2021/07/20 14:48:02 c6ae8917-d24e-4115-b719-f663dbfb9ffb: 400 - 2021/07/20 14:48:02 d0ce1f5c-e468-4899-8590-2bb6076247f2: 200 - 2021/07/20 14:48:02 Deleting rows with IDs 8d1849dd-9222-4b12-a4ff-94e583b544a8 and d0ce1f5c-e468-4899-8590-2bb6076247f2... - 2021/07/20 14:48:02 Rows deleted. - 2021/07/20 14:48:02 Balances after deletion: - 2021/07/20 14:48:02 3a936990-a0c9-45bf-bc24-92e10d91dca9: 300 - 2021/07/20 14:48:02 c6ae8917-d24e-4115-b719-f663dbfb9ffb: 400 - ~~~ - - As shown in the output, the code does the following: - - Inserts some rows into the `accounts` table. - - Reads values from the table. - - Updates values in the table. - - Deletes values from the table. - -## What's next? - -Read more about using the [Go pgx driver](https://pkg.go.dev/github.com/jackc/pgx?tab=doc). 
- -{% include {{ page.version.version }}/app/see-also-links.md %} diff --git a/src/current/v22.1/build-a-java-app-with-cockroachdb-hibernate.md b/src/current/v22.1/build-a-java-app-with-cockroachdb-hibernate.md deleted file mode 100644 index 75b994ccfe0..00000000000 --- a/src/current/v22.1/build-a-java-app-with-cockroachdb-hibernate.md +++ /dev/null @@ -1,155 +0,0 @@ ---- -title: Build a Java App with CockroachDB and Hibernate -summary: Learn how to use CockroachDB from a simple Java application with the Hibernate ORM. -toc: true -twitter: false -referral_id: docs_java_hibernate -docs_area: get_started ---- - -{% include {{ page.version.version }}/filter-tabs/crud-java.md %} - -This tutorial shows you how build a simple Java application with CockroachDB and the Hibernate ORM. - -{% include {{page.version.version}}/app/java-version-note.md %} - -{{site.data.alerts.callout_success}} -For a sample app and tutorial that uses Spring Data JPA (Hibernate) and CockroachDB, see [Build a Spring App with CockroachDB and JPA](build-a-spring-app-with-cockroachdb-jpa.html). - -For another use of Hibernate with CockroachDB, see our [`examples-orms`](https://github.com/cockroachdb/examples-orms) repository. -{{site.data.alerts.end}} - -## Step 1. Start CockroachDB - -{% include {{ page.version.version }}/setup/sample-setup-parameters.md %} - -## Step 2. Get the sample code - -Clone the `example-app-java-hibernate` repo to your machine: - -{% include_cached copy-clipboard.html %} -~~~ shell -git clone https://github.com/cockroachlabs/example-app-java-hibernate/ -~~~ - -{{site.data.alerts.callout_info}} -The version of the CockroachDB Hibernate dialect in `hibernate.cfg.xml` corresponds to a version of CockroachDB. For more information, see [Install Client Drivers: Hibernate](install-client-drivers.html). -{{site.data.alerts.end}} - -## Step 3. Run the code - -The sample code in this tutorial ([`Sample.java`](#code-contents)) uses Hibernate to map Java methods to SQL operations. The code performs the following operations, which roughly correspond to method calls in the `Sample` class: - -1. Creates an `accounts` table as specified by the `Account` mapping class. -1. Inserts rows into the table with the `addAccounts()` method. -1. Transfers money from one account to another with the `transferFunds()` method. -1. Prints out account balances before and after the transfer with the `getAccountBalance()` method. - -In addition, the code shows a pattern for automatically handling [transaction retries](transactions.html#client-side-intervention-example) by wrapping transactions in a higher-order function named `runTransaction()`. It also includes a method for testing the retry handling logic (`Sample.forceRetryLogic()`), which will be run if you set the `FORCE_RETRY` variable to `true`. - -It does all of the above using the practices we recommend for using Hibernate (and the underlying JDBC connection) with CockroachDB, which are listed in the [Recommended Practices](#recommended-practices) section below. - - -The contents of `Sample.java`: - -{% include_cached copy-clipboard.html %} -~~~ java -{% remote_include https://raw.githubusercontent.com/cockroachlabs/example-app-java-hibernate/master/src/main/java/com/cockroachlabs/Sample.java %} -~~~ - -
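The retry wrapper is the part of `Sample.java` most worth studying. As a minimal sketch of that pattern only (the class, constant, and method names below are illustrative and simplified rather than the exact code in `Sample.java`, and the exponential backoff between attempts is omitted), a wrapper retries a transaction only when the server reports the PostgreSQL serialization-failure SQLSTATE `40001`, which CockroachDB uses for retryable transaction errors:

~~~ java
// Illustrative sketch of a retry wrapper around a Hibernate transaction.
// See Sample.java above for the full implementation, including backoff between attempts.
import java.util.function.Function;

import org.hibernate.JDBCException;
import org.hibernate.Session;
import org.hibernate.Transaction;

public final class RetrySketch {
    private static final int MAX_RETRIES = 3;
    private static final String RETRY_SQL_STATE = "40001"; // CockroachDB retryable transaction error

    public static <T> T runTransaction(Session session, Function<Session, T> fn) {
        for (int attempt = 1; attempt <= MAX_RETRIES; attempt++) {
            Transaction txn = session.beginTransaction();
            try {
                T result = fn.apply(session);
                txn.commit();
                return result;
            } catch (JDBCException e) {
                if (txn.getStatus().canRollback()) {
                    txn.rollback();
                }
                // Retry only serialization failures; rethrow everything else.
                if (!RETRY_SQL_STATE.equals(e.getSQLException().getSQLState())) {
                    throw e;
                }
            }
        }
        throw new RuntimeException("transaction did not succeed after " + MAX_RETRIES + " attempts");
    }
}
~~~

A caller then wraps each unit of work, for example `runTransaction(session, s -> transferFunds(s, fromAccountId, toAccountId, amount))`, so that a retryable failure re-executes the whole function rather than leaving a partially applied transaction.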
                              - -### Update the connection configuration - -Open `src/main/resources/hibernate.cfg.xml`, and set the `hibernate.connection.url`, `hibernate.connection.username`, and `hibernate.connection.password` properties, using the connection information that you obtained from the {{ site.data.products.cloud }} Console: - -{% include_cached copy-clipboard.html %} -~~~ xml -jdbc:postgresql://{host}:{port}/defaultdb?sslmode=verify-full -{username} -{password} -~~~ - -
                              - -### Run the code - -Compile and run the code using `gradlew`, which will also download the dependencies: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cd example-app-java-hibernate -~~~ - -{% include_cached copy-clipboard.html %} -~~~ shell -$ ./gradlew run -~~~ - -Toward the end of the output, you should see: - -~~~ -APP: BEGIN; -APP: addAccounts() --> 1.00 -APP: COMMIT; -APP: BEGIN; -APP: getAccountBalance(1) --> 1000.00 -APP: COMMIT; -APP: BEGIN; -APP: getAccountBalance(2) --> 250.00 -APP: COMMIT; -APP: getAccountBalance(1) --> 1000.00 -APP: getAccountBalance(2) --> 250.00 -APP: BEGIN; -APP: transferFunds(1, 2, 100.00) --> 100.00 -APP: COMMIT; -APP: transferFunds(1, 2, 100.00) --> 100.00 -APP: BEGIN; -APP: getAccountBalance(1) --> 900.00 -APP: COMMIT; -APP: BEGIN; -APP: getAccountBalance(2) --> 350.00 -APP: COMMIT; -APP: getAccountBalance(1) --> 900.00 -APP: getAccountBalance(2) --> 350.00 -~~~ - -## Recommended Practices - -### Generate PKCS8 keys for client authentication - -{% include {{page.version.version}}/app/pkcs8-gen.md %} - -
                              - -{% include cockroachcloud/cc-no-user-certs.md %} - -
                              - -### Use `IMPORT` to read in large data sets - -If you are trying to get a large data set into CockroachDB all at once (a bulk import), avoid writing client-side code altogether and use the [`IMPORT`](import.html) statement instead. It is much faster and more efficient than making a series of [`INSERT`s](insert.html) and [`UPDATE`s](update.html). It bypasses the [SQL layer](architecture/sql-layer.html) altogether and writes directly to the [storage layer](architecture/storage-layer.html) of the database. - -For more information about importing data from PostgreSQL, see [Migrate from PostgreSQL](migrate-from-postgres.html). - -For more information about importing data from MySQL, see [Migrate from MySQL](migrate-from-mysql.html). - -### Use `reWriteBatchedInserts` for increased speed - -We strongly recommend setting `reWriteBatchedInserts=true`; we have seen 2-3x performance improvements with it enabled. From [the JDBC connection parameters documentation](https://jdbc.postgresql.org/documentation/use/#connection-parameters): - -> This will change batch inserts from `insert into foo (col1, col2, col3) values (1,2,3)` into `insert into foo (col1, col2, col3) values (1,2,3), (4,5,6)` this provides 2-3x performance improvement - -### Retrieve large data sets in chunks using cursors - -CockroachDB now supports the PostgreSQL wire-protocol cursors for implicit transactions and explicit transactions executed to completion. This means the [PGJDBC driver](https://jdbc.postgresql.org) can use this protocol to stream queries with large result sets. This is much faster than [paginating through results in SQL using `LIMIT .. OFFSET`](pagination.html). - -For instructions showing how to use cursors in your Java code, see [Getting results based on a cursor](https://jdbc.postgresql.org/documentation/query/#getting-results-based-on-a-cursor) from the PGJDBC documentation. - -Note that interleaved execution (partial execution of multiple statements within the same connection and transaction) is not supported when [`Statement.setFetchSize()`](https://docs.oracle.com/javase/8/docs/api/java/sql/Statement.html#setFetchSize-int-) is used. - -## What's next? - -Read more about using the [Hibernate ORM](http://hibernate.org/orm/), or check out a more realistic implementation of Hibernate with CockroachDB in our [`examples-orms`](https://github.com/cockroachdb/examples-orms) repository. - -{% include {{page.version.version}}/app/see-also-links.md %} diff --git a/src/current/v22.1/build-a-java-app-with-cockroachdb-jooq.md b/src/current/v22.1/build-a-java-app-with-cockroachdb-jooq.md deleted file mode 100644 index 7c4dc11cd0b..00000000000 --- a/src/current/v22.1/build-a-java-app-with-cockroachdb-jooq.md +++ /dev/null @@ -1,272 +0,0 @@ ---- -title: Build a Java App with CockroachDB and jOOQ -summary: Learn how to use CockroachDB from a simple Java application with jOOQ. -toc: true -twitter: false -docs_area: get_started ---- - -{% include {{ page.version.version }}/filter-tabs/crud-java.md %} - -This tutorial shows you how build a simple Java application with CockroachDB and [jOOQ](https://www.jooq.org/). - -CockroachDB is supported in jOOQ [Professional and Enterprise editions](https://www.jooq.org/download/#databases). - -{% include {{page.version.version}}/app/java-version-note.md %} - -{{site.data.alerts.callout_success}} -For another use of jOOQ with CockroachDB, see our [`examples-orms`](https://github.com/cockroachdb/examples-orms) repository. 
-{{site.data.alerts.end}} - -## Before you begin - -{% include {{page.version.version}}/app/before-you-begin.md %} - -## Step 1. Install Maven - -This tutorial uses the [Maven build tool](https://maven.apache.org/) to manage application dependencies. - -To install Maven on Mac, run the following command: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ brew install maven -~~~ - -To install Maven on a Debian-based Linux distribution like Ubuntu: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ apt-get install maven -~~~ - -For other ways to install Maven, see [its official documentation](https://maven.apache.org/install.html). - -## Step 2. Install jOOQ - -Download the free trial of jOOQ Professional or Enterprise edition from [jOOQ's website](https://www.jooq.org/download), and unzip the file. - -To install jOOQ to your machine's local Maven repository, run the `maven-install.sh` script included in the jOOQ install folder: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ chmod +x maven-install.sh -~~~ - -{% include_cached copy-clipboard.html %} -~~~ shell -$ ./maven-install.sh -~~~ - -
                              - -## Step 3. Create the `maxroach` user and `bank` database - -{% include {{page.version.version}}/app/create-maxroach-user-and-bank-database.md %} - -## Step 4. Generate a certificate for the `maxroach` user - -Create a certificate and key for the `maxroach` user by running the following command. The code samples will run as this user. - -The [`--also-generate-pkcs8-key` flag](cockroach-cert.html#flag-pkcs8) generates a key in [PKCS#8 format](https://tools.ietf.org/html/rfc5208), which is the standard key encoding format in Java. In this case, the generated PKCS8 key will be named `client.maxroach.key.pk8`. - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach cert create-client maxroach --certs-dir=certs --ca-key=my-safe-directory/ca.key --also-generate-pkcs8-key -~~~ - -## Step 5. Run the Java code - -The code below uses jOOQ to map Java methods to SQL operations. It performs the following steps, some of which correspond to method calls of the `Sample` class. - -1. Inputs the `db.sql` file to the database. `db.sql` includes SQL statements that create an `accounts` table in the `bank` database. -2. Inserts rows into the `accounts` table using `session.save(new Account(int id, int balance))` (see `Sample.addAccounts()`). -3. Transfers money from one account to another, printing out account balances before and after the transfer (see `transferFunds(long fromId, long toId, long amount)`). -4. Prints out account balances before and after the transfer (see `Sample.getAccountBalance(long id)`). - -In addition, the code shows a pattern for automatically handling [transaction retries](transactions.html#client-side-intervention-example) by wrapping transactions in a higher-order function `Sample.runTransaction()`. It also includes a method for testing the retry handling logic (`Sample.forceRetryLogic()`), which will be run if you set the `FORCE_RETRY` variable to `true`. - -To run it: - -1. Download and unzip [jooq-basic-sample.zip](https://github.com/cockroachdb/docs/raw/master/_includes/{{ page.version.version }}/app/jooq-basic-sample/jooq-basic-sample.zip). -2. Open `jooq-basic-sample/src/main/java/com/cockroachlabs/Sample.java`, and edit the connection string passed to `DriverManager.getConnection()` in the `Sample` class's `main()` method so that the certificate paths are fully and correctly specified. -3. 
Compile and run the code using Maven: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cd jooq-basic-sample - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ mvn compile - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ mvn exec:java -Dexec.mainClass=com.cockroachlabs.Sample - ~~~ - -Here are the contents of [`Sample.java`](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{page.version.version}}/app/jooq-basic-sample/Sample.java), the Java file containing the main `Sample` class: - -{% include_cached copy-clipboard.html %} -~~~ java -{% include {{page.version.version}}/app/jooq-basic-sample/Sample.java %} -~~~ - -Toward the end of the output, you should see: - -~~~ -APP: BEGIN; -APP: addAccounts() --> 1 -APP: COMMIT; -APP: BEGIN; -APP: getAccountBalance(1) --> 1000 -APP: COMMIT; -APP: BEGIN; -APP: getAccountBalance(2) --> 250 -APP: COMMIT; -APP: getAccountBalance(1) --> 1000 -APP: getAccountBalance(2) --> 250 -APP: BEGIN; -APP: transferFunds(1, 2, 100) --> 100 -APP: COMMIT; -APP: transferFunds(1, 2, 100) --> 100 -APP: BEGIN; -APP: getAccountBalance(1) --> 900 -APP: COMMIT; -APP: BEGIN; -APP: getAccountBalance(2) --> 350 -APP: COMMIT; -APP: getAccountBalance(1) --> 900 -APP: getAccountBalance(2) --> 350 -~~~ - -To verify that the account balances were updated successfully, start the [built-in SQL client](cockroach-sql.html): - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach sql --certs-dir=certs -~~~ - -To check the account balances, issue the following statement: - -{% include_cached copy-clipboard.html %} -~~~ sql -SELECT id, balance FROM accounts; -~~~ - -~~~ - id | balance -+----+---------+ - 1 | 900 - 2 | 350 - 3 | 314159 -(3 rows) -~~~ - -
                              - -
                              - -## Step 3. Create the `maxroach` user and `bank` database - -{% include {{page.version.version}}/app/insecure/create-maxroach-user-and-bank-database.md %} - -## Step 4. Run the Java code - -The code below uses jOOQ to map Java methods to SQL operations. It performs the following steps, some of which correspond to method calls of the `Sample` class. - -1. Inputs the `db.sql` file to the database. `db.sql` includes SQL statements that create an `accounts` table in the `bank` database. -2. Inserts rows into the `accounts` table using `session.save(new Account(int id, int balance))` (see `Sample.addAccounts()`). -3. Transfers money from one account to another, printing out account balances before and after the transfer (see `transferFunds(long fromId, long toId, long amount)`). -4. Prints out account balances before and after the transfer (see `Sample.getAccountBalance(long id)`). - -In addition, the code shows a pattern for automatically handling [transaction retries](transactions.html#client-side-intervention-example) by wrapping transactions in a higher-order function `Sample.runTransaction()`. It also includes a method for testing the retry handling logic (`Sample.forceRetryLogic()`), which will be run if you set the `FORCE_RETRY` variable to `true`. - -To run it: - -1. Download and unzip [jooq-basic-sample.zip](https://github.com/cockroachdb/docs/raw/master/_includes/{{ page.version.version }}/app/insecure/jooq-basic-sample/jooq-basic-sample.zip). -2. Compile and run the code using Maven: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cd jooq-basic-sample - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ mvn compile - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ mvn exec:java -Dexec.mainClass=com.cockroachlabs.Sample - ~~~ - -Here are the contents of [`Sample.java`](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{page.version.version}}/app/insecure/jooq-basic-sample/Sample.java), the Java file containing the main `Sample` class: - -{% include_cached copy-clipboard.html %} -~~~ java -{% include {{page.version.version}}/app/insecure/jooq-basic-sample/Sample.java %} -~~~ - -Toward the end of the output, you should see: - -~~~ -APP: BEGIN; -APP: addAccounts() --> 1 -APP: COMMIT; -APP: BEGIN; -APP: getAccountBalance(1) --> 1000 -APP: COMMIT; -APP: BEGIN; -APP: getAccountBalance(2) --> 250 -APP: COMMIT; -APP: getAccountBalance(1) --> 1000 -APP: getAccountBalance(2) --> 250 -APP: BEGIN; -APP: transferFunds(1, 2, 100) --> 100 -APP: COMMIT; -APP: transferFunds(1, 2, 100) --> 100 -APP: BEGIN; -APP: getAccountBalance(1) --> 900 -APP: COMMIT; -APP: BEGIN; -APP: getAccountBalance(2) --> 350 -APP: COMMIT; -APP: getAccountBalance(1) --> 900 -APP: getAccountBalance(2) --> 350 -~~~ - -To verify that the account balances were updated successfully, start the [built-in SQL client](cockroach-sql.html): - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure -~~~ - -To check the account balances, issue the following statement: - -{% include_cached copy-clipboard.html %} -~~~ sql -SELECT id, balance FROM accounts; -~~~ - -~~~ - id | balance -+----+---------+ - 1 | 900 - 2 | 350 - 3 | 314159 -(3 rows) -~~~ - -
                              - - -## What's next? - -Read more about using [jOOQ](https://www.jooq.org/), or check out a more realistic implementation of jOOQ with CockroachDB in our [`examples-orms`](https://github.com/cockroachdb/examples-orms) repository. - -{% include {{page.version.version}}/app/see-also-links.md %} diff --git a/src/current/v22.1/build-a-java-app-with-cockroachdb.md b/src/current/v22.1/build-a-java-app-with-cockroachdb.md deleted file mode 100644 index 4cb61e69c9c..00000000000 --- a/src/current/v22.1/build-a-java-app-with-cockroachdb.md +++ /dev/null @@ -1,291 +0,0 @@ ---- -title: Build a Simple CRUD Java App with CockroachDB and JDBC -summary: Learn how to use CockroachDB from a simple Java application with the JDBC driver. -toc: true -twitter: false -referral_id: docs_java_jdbc -docs_area: get_started ---- - -{% include {{ page.version.version }}/filter-tabs/crud-java.md %} - -This tutorial shows you how to build a simple CRUD Java application with CockroachDB and the Java JDBC driver. - -{% include {{page.version.version}}/app/java-version-note.md %} - -{{site.data.alerts.callout_success}} -For a sample app and tutorial that uses Spring Data JDBC and CockroachDB, see [Build a Spring App with CockroachDB and JDBC](build-a-spring-app-with-cockroachdb-jdbc.html). -{{site.data.alerts.end}} - -## Step 1. Start CockroachDB - -{% include {{ page.version.version }}/setup/sample-setup-jdbc.md %} - -## Step 2. Get the code - -Clone the code's GitHub repo: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ git clone https://github.com/cockroachlabs/example-app-java-jdbc/ -~~~ - -The project has the following directory structure: - -~~~ -├── README.md -├── app -│   ├── build.gradle -│   └── src -│   └── main -│   ├── java -│   │   └── com -│   │   └── cockroachlabs -│   │   └── BasicExample.java -│   └── resources -│   └── dbinit.sql -├── gradle -│   └── wrapper -│   ├── gradle-wrapper.jar -│   └── gradle-wrapper.properties -├── gradlew -├── gradlew.bat -└── settings.gradle -~~~ - -The `dbinit.sql` file initializes the database schema that the application uses: - -{% include_cached copy-clipboard.html %} -~~~ java -{% remote_include https://raw.githubusercontent.com/cockroachlabs/example-app-java-jdbc/master/app/src/main/resources/dbinit.sql %} -~~~ - -The `BasicExample.java` file contains the code for `INSERT`, `SELECT`, and `UPDATE` SQL operations. The file also contains the `main` method of the program. - -{% include_cached copy-clipboard.html %} -~~~ java -{% remote_include https://raw.githubusercontent.com/cockroachlabs/example-app-java-jdbc/master/app/src/main/java/com/cockroachlabs/BasicExample.java %} -~~~ - -The sample app uses JDBC and the [Data Access Object (DAO)](https://en.wikipedia.org/wiki/Data_access_object) pattern to map Java methods to SQL operations. It consists of two classes: - -1. `BasicExample`, which is where the application logic lives. -1. `BasicExampleDAO`, which is used by the application to access the data store (in this case CockroachDB). This class also includes a helper function (`runSql`) that runs SQL statements inside a transaction, [retrying statements](transactions.html#transaction-retries) as needed. - -The `main` method of the app performs the following steps which roughly correspond to method calls in the `BasicExample` class. 
- -| Step | Method | -|------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------| -| 1. Insert account data using a `Map` that corresponds to the input to `INSERT` on the backend | `BasicExampleDAO.updateAccounts(Map balance)` | -| 2. Transfer money from one account to another, printing out account balances before and after the transfer | `BasicExampleDAO.transferFunds(UUID from, UUID to, BigDecimal amount)` | -| 3. Insert random account data using JDBC's bulk insertion support | `BasicExampleDAO.bulkInsertRandomAccountData()` | -| 4. Print out some account data | `BasicExampleDAO.readAccounts(int limit)` | - -It does all of the above using the practices we recommend for using JDBC with CockroachDB, which are listed in the [Recommended Practices](#recommended-practices) section below. - -## Step 3. Update the connection configuration - -1. Navigate to the `example-app-java-jdbc` directory: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cd example-app-java-jdbc - ~~~ - -1. Set the `JDBC_DATABASE_URL` environment variable to a JDBC-compatible connection string: - -
                              - - {% include_cached copy-clipboard.html %} - ~~~ shell - export JDBC_DATABASE_URL=jdbc:postgresql://localhost:26257/defaultdb?sslmode=disable&user=root - ~~~ - -
                              - -
                              - - 1. Paste in the command you copied earlier: - - {% include_cached copy-clipboard.html %} - ~~~ shell - export JDBC_DATABASE_URL="{connection-string}" - ~~~ - - Where `{connection-string}` is the JDBC connection string from the command you copied earlier. - -
                              - -
                              - - 1. Use the `cockroach convert-url` command to convert the connection string that you copied from the {{ site.data.products.cloud }} Console to a [valid connection string for JDBC connections](connect-to-the-database.html?filters=java): - - {% include_cached copy-clipboard.html %} - ~~~ shell - cockroach convert-url --url $DATABASE_URL - ~~~ - - ~~~ - ... - - # Connection URL for JDBC (Java and JVM-based languages): - jdbc:postgresql://{host}:{port}/{database}?password={password}&sslmode=verify-full&user={username} - ~~~ - - 2. Set the `JDBC_DATABASE_URL` environment variable to the JDBC-compatible connection string: - - {% include_cached copy-clipboard.html %} - ~~~ shell - export JDBC_DATABASE_URL="{jdbc-connection-string}" - ~~~ - -
                              - -## Step 4. Run the code - -Compile and run the code: - -{% include_cached copy-clipboard.html %} -~~~ shell -./gradlew run -~~~ - -The output will look like the following: - -~~~ -com.cockroachlabs.BasicExampleDAO.createAccountsTable: - 'CREATE TABLE IF NOT EXISTS accounts (id UUID PRIMARY KEY, balance int8)' - -com.cockroachlabs.BasicExampleDAO.updateAccounts: - 'INSERT INTO accounts (id, balance) VALUES ('b5679853-b968-4206-91ec-68945fa3e716', 250)' - -com.cockroachlabs.BasicExampleDAO.updateAccounts: - 'INSERT INTO accounts (id, balance) VALUES ('d1c41041-6589-4b06-8d7c-b9d6d901727e', 1000)' -BasicExampleDAO.updateAccounts: - => 2 total updated accounts -main: - => Account balances at time '15:09:08.902': - ID 1 => $1000 - ID 2 => $250 - -com.cockroachlabs.BasicExampleDAO.transferFunds: - 'UPSERT INTO accounts (id, balance) VALUES('d99e6bb5-ecd1-48e5-b6b6-47fc9a4bc752', ((SELECT balance FROM accounts WHERE id = 'd99e6bb5-ecd1-48e5-b6b6-47fc9a4bc752') - 100)),('6f0c1f94-509a-47e3-a9ab-6a9e3965945c', ((SELECT balance FROM accounts WHERE id = '6f0c1f94-509a-47e3-a9ab-6a9e3965945c') + 100))' -BasicExampleDAO.transferFunds: - => $100 transferred between accounts d99e6bb5-ecd1-48e5-b6b6-47fc9a4bc752 and 6f0c1f94-509a-47e3-a9ab-6a9e3965945c, 2 rows updated -main: - => Account balances at time '15:09:09.142': - ID 1 => $1000 - ID 2 => $250 - -BasicExampleDAO.bulkInsertRandomAccountData: - 'INSERT INTO accounts (id, balance) VALUES ('b70a0c48-fdf4-42ea-b07a-2fea83d77c7d', '287108674'::numeric)' - => 128 row(s) updated in this batch - -BasicExampleDAO.bulkInsertRandomAccountData: - 'INSERT INTO accounts (id, balance) VALUES ('75a5f894-532a-464d-b37e-a4b9ec1c1db6', '189904311'::numeric)' - => 128 row(s) updated in this batch - -BasicExampleDAO.bulkInsertRandomAccountData: - 'INSERT INTO accounts (id, balance) VALUES ('0803968f-ba07-4ece-82d5-24d4da9fdee9', '832474731'::numeric)' - => 128 row(s) updated in this batch - -BasicExampleDAO.bulkInsertRandomAccountData: - 'INSERT INTO accounts (id, balance) VALUES ('082e634d-4930-41eb-9839-298632a5530a', '665918272'::numeric)' - => 128 row(s) updated in this batch - -BasicExampleDAO.bulkInsertRandomAccountData: - => finished, 512 total rows inserted - -com.cockroachlabs.BasicExampleDAO.readAccounts: - 'SELECT id, balance FROM accounts LIMIT 10' - balance => 424934060 - balance => 62220740 - balance => 454671673 - balance => 556061618 - balance => 450164589 - balance => 996867752 - balance => 55869978 - balance => 747446662 - balance => 175832969 - balance => 181799597 - -BUILD SUCCESSFUL in 8s -3 actionable tasks: 3 executed -~~~ - -## Recommended Practices - -### Generate PKCS8 keys for user authentication - -{% include {{page.version.version}}/app/pkcs8-gen.md %} - -
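For a self-hosted, secure cluster that uses client certificate authentication, the generated PKCS#8 key can be referenced directly through PGJDBC's SSL connection parameters. The snippet below is a hypothetical example only: the user name and file paths are placeholders for the certificates and key you generated above, and in this tutorial you would fold the resulting URL into `JDBC_DATABASE_URL`:

~~~ java
// Hypothetical example: a JDBC URL that points PGJDBC at a PKCS#8 client key.
// sslmode, sslrootcert, sslcert, and sslkey are standard PGJDBC connection parameters;
// the user name and certificate paths below are placeholders for your own files.
String url = "jdbc:postgresql://{host}:26257/{database}"
        + "?user={username}"
        + "&sslmode=verify-full"
        + "&sslrootcert=certs/ca.crt"
        + "&sslcert=certs/client.{username}.crt"
        + "&sslkey=certs/client.{username}.key.pk8";

try (Connection conn = DriverManager.getConnection(url)) {
    // Use the connection as usual.
}
~~~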
                              - -{% include cockroachcloud/cc-no-user-certs.md %} - -
-
-### Use `IMPORT` to read in large data sets
-
-If you are trying to get a large data set into CockroachDB all at once (a bulk import), avoid writing client-side code altogether and use the [`IMPORT`](import.html) statement instead. It is much faster and more efficient than making a series of [`INSERT`s](insert.html) and [`UPDATE`s](update.html). It bypasses the [SQL layer](architecture/sql-layer.html) altogether and writes directly to the [storage layer](architecture/storage-layer.html) of the database.
-
-For more information about importing data from PostgreSQL, see [Migrate from PostgreSQL](migrate-from-postgres.html).
-
-For more information about importing data from MySQL, see [Migrate from MySQL](migrate-from-mysql.html).
-
-### Use `reWriteBatchedInserts` for increased speed
-
-We strongly recommend setting `reWriteBatchedInserts=true`; we have seen 2-3x performance improvements with it enabled. From [the JDBC connection parameters documentation](https://jdbc.postgresql.org/documentation/use/#connection-parameters):
-
-> This will change batch inserts from `insert into foo (col1, col2, col3) values (1,2,3)` into `insert into foo (col1, col2, col3) values (1,2,3), (4,5,6)` this provides 2-3x performance improvement
-
-### Use a batch size of 128
-
-PGJDBC's batching support only works with [powers of two](https://github.com/pgjdbc/pgjdbc/blob/7b52b0c9e5b9aa9a9c655bb68f23bf4ec57fd51c/pgjdbc/src/main/java/org/postgresql/jdbc/PgPreparedStatement.java#L1597), and will split batches of other sizes into multiple sub-batches. This means that a batch of size 128 can be 6x faster than a batch of size 250.
-
-The code snippet below shows a pattern for using a batch size of 128, following the approach of the longer example above (specifically, the `BasicExampleDAO.bulkInsertRandomAccountData()` method).
-
-Specifically, it does the following:
-
-1. Turn off auto-commit so you can manage the transaction lifecycle and thus the size of the batch inserts.
-2. Given an overall update size of 500 rows (for example), split it into batches of size 128 and execute each batch in turn.
-3. Finally, commit the batches of statements you've just executed.
-
-{% include_cached copy-clipboard.html %}
-~~~ java
-int BATCH_SIZE = 128;
-connection.setAutoCommit(false);
-
-try (PreparedStatement pstmt = connection.prepareStatement("INSERT INTO accounts (id, balance) VALUES (?, ?)")) {
-    for (int i=0; i<=(500/BATCH_SIZE);i++) {
-        for (int j=0; j<BATCH_SIZE; j++) {
-            // Fill in the placeholders for one row and add it to the current batch.
-            pstmt.setObject(1, UUID.randomUUID());
-            pstmt.setInt(2, ThreadLocalRandom.current().nextInt(0, 1000000000));
-            pstmt.addBatch();
-        }
-        int[] count = pstmt.executeBatch();
-        System.out.printf("    => %s row(s) updated in this batch\n", count.length); // Verifying 128 rows in the batch
-    }
-    connection.commit();
-}
-~~~
-
-### Retrieve large data sets in chunks using cursors
-
-CockroachDB supports PostgreSQL wire-protocol cursors for implicit transactions and explicit transactions executed to completion. This means the [PGJDBC driver](https://jdbc.postgresql.org) can use this protocol to stream queries with large result sets. This is much faster than [paginating through results in SQL using `LIMIT .. OFFSET`](pagination.html).
-
-For instructions showing how to use cursors in your Java code, see [Getting results based on a cursor](https://jdbc.postgresql.org/documentation/query/#getting-results-based-on-a-cursor) from the PGJDBC documentation.
-
-Note that interleaved execution (partial execution of multiple statements within the same connection and transaction) is not supported when [`Statement.setFetchSize()`](https://docs.oracle.com/javase/8/docs/api/java/sql/Statement.html#setFetchSize-int-) is used.
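-
-The following is a minimal sketch of cursor-based fetching with PGJDBC. It is not part of the example application; the fetch size and query are illustrative, and it assumes the `JDBC_DATABASE_URL` environment variable set earlier contains a valid connection string.
-
-{% include_cached copy-clipboard.html %}
-~~~ java
-import java.sql.Connection;
-import java.sql.DriverManager;
-import java.sql.ResultSet;
-import java.sql.Statement;
-
-public class StreamAccounts {
-    public static void main(String[] args) throws Exception {
-        try (Connection connection = DriverManager.getConnection(System.getenv("JDBC_DATABASE_URL"))) {
-            // PGJDBC only uses a cursor when autocommit is off.
-            connection.setAutoCommit(false);
-            try (Statement stmt = connection.createStatement()) {
-                stmt.setFetchSize(1000); // rows fetched from the server per round trip
-                try (ResultSet rs = stmt.executeQuery("SELECT id, balance FROM accounts")) {
-                    while (rs.next()) {
-                        // Each row is processed as it is streamed, rather than
-                        // materializing the entire result set in memory.
-                        System.out.printf("%s => %s%n", rs.getString("id"), rs.getString("balance"));
-                    }
-                }
-            }
-            connection.commit();
-        }
-    }
-}
-~~~
-
-A smaller fetch size lowers client memory use at the cost of more round trips to the cluster; choose a value based on your row size and network latency.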
- -### Connection pooling - -For guidance on connection pooling, with an example using JDBC and [HikariCP](https://github.com/brettwooldridge/HikariCP), see [Connection Pooling](connection-pooling.html). - -## What's next? - -Read more about using the [Java JDBC driver](https://jdbc.postgresql.org/). - -{% include {{page.version.version}}/app/see-also-links.md %} diff --git a/src/current/v22.1/build-a-nodejs-app-with-cockroachdb-knexjs.md b/src/current/v22.1/build-a-nodejs-app-with-cockroachdb-knexjs.md deleted file mode 100644 index 83b9780b378..00000000000 --- a/src/current/v22.1/build-a-nodejs-app-with-cockroachdb-knexjs.md +++ /dev/null @@ -1,88 +0,0 @@ ---- -title: Build a Simple CRUD Node.js App with CockroachDB and Knex.js -summary: Learn how to use CockroachDB from a simple CRUD application that uses the Knex.js query builder. -toc: true -twitter: false -referral_id: docs_node_knexjs -docs_area: get_started ---- - -{% include {{ page.version.version }}/filter-tabs/crud-js.md %} - -This tutorial shows you how build a simple Node.js application with CockroachDB and [Knex.js](https://knexjs.org/). - -## Step 1. Start CockroachDB - -{% include {{ page.version.version }}/setup/sample-setup.md %} - -## Step 2. Get the code - -Clone the code's GitHub repo: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ git clone https://github.com/cockroachlabs/example-app-node-knex -~~~ - -## Step 3. Run the code - -1. Set the `DATABASE_URL` environment variable to the connection string: - -
                              - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ export DATABASE_URL="postgresql://root@localhost:26257?sslmode=disable" - ~~~ - -
                              - -
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ shell
-    $ export DATABASE_URL="{connection-string}"
-    ~~~
-
-    Where `{connection-string}` is the connection string you copied earlier.
-
                              - - The app uses the connection string saved to the `DATABASE_URL` environment variable to connect to your cluster and execute the code. - -1. Install the app requirements: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ npm install - ~~~ - -1. Run the app: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ npm start - ~~~ - - The output should look like this: - - ~~~ - Initializing accounts table... - New account balances: - { id: 'bc0f7136-7103-4dc4-88fc-102d5bbd312f', balance: '1000' } - { id: '35bde7d0-29c9-4277-a117-3eb80c85ae16', balance: '250' } - { id: '18cc1b2d-be61-4a8d-942c-480f5a6bc207', balance: '0' } - Transferring funds... - New account balances: - { id: 'bc0f7136-7103-4dc4-88fc-102d5bbd312f', balance: '900' } - { id: '35bde7d0-29c9-4277-a117-3eb80c85ae16', balance: '350' } - { id: '18cc1b2d-be61-4a8d-942c-480f5a6bc207', balance: '0' } - Deleting a row... - New account balances: - { id: 'bc0f7136-7103-4dc4-88fc-102d5bbd312f', balance: '900' } - { id: '18cc1b2d-be61-4a8d-942c-480f5a6bc207', balance: '0' } - ~~~ - -## What's next? - -{% include {{page.version.version}}/app/see-also-links.md %} diff --git a/src/current/v22.1/build-a-nodejs-app-with-cockroachdb-prisma.md b/src/current/v22.1/build-a-nodejs-app-with-cockroachdb-prisma.md deleted file mode 100644 index 759ef34f8d7..00000000000 --- a/src/current/v22.1/build-a-nodejs-app-with-cockroachdb-prisma.md +++ /dev/null @@ -1,223 +0,0 @@ ---- -title: Build a Simple CRUD Node.js App with CockroachDB and Prisma -summary: Learn how to use CockroachDB from a simple CRUD application that uses the Prisma ORM. -toc: true -twitter: false -referral_id: docs_node_prisma -docs_area: get_started ---- - -{% include {{ page.version.version }}/filter-tabs/crud-js.md %} - -This tutorial shows you how build a simple Node.js application with CockroachDB and [Prisma](https://www.prisma.io). - -## Step 1. Start CockroachDB - -{% include {{ page.version.version }}/setup/sample-setup.md %} - -## Step 2. Get the code - -1. Clone the code's GitHub repo: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ git clone https://github.com/cockroachlabs/example-app-node-prisma - ~~~ - -1. Install the application dependencies: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cd example-app-node-prisma - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ npm install - ~~~ - -## Step 3. Initialize the database - -1. Create a `.env` file for your project, and set the `DATABASE_URL` environment variable to a valid connection string to your cluster. - -
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ shell
-    $ echo "DATABASE_URL={connection-string}" >> .env
-    ~~~
-
-    Where `{connection-string}` is the connection string you copied earlier.
-
                              - -
                              - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ echo "DATABASE_URL=postgresql://root@localhost:26257?sslmode=disable" >> .env - ~~~ - - {{site.data.alerts.callout_info}} - `postgresql://root@localhost:26257?sslmode=disable` is the `sql` connection string you obtained earlier from the `cockroach` welcome text. - {{site.data.alerts.end}} - -
                              - - Prisma loads the variables defined in `.env` to the project environment. By default, Prisma uses the `DATABASE_URL` environment variable as the connection string to the database. - -1. Run [Prisma Migrate](https://www.prisma.io/docs/concepts/components/prisma-migrate) to initialize the database with the schema defined in `prisma/prisma.schema`. - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ node_modules/.bin/prisma migrate dev --name init - ~~~ - - You should see the following output: - - ~~~ - Your database is now in sync with your schema. - - ✔ Generated Prisma Client (3.12.0 | library) to ./node_modules/@prisma/client in 73ms - ~~~ - - This command also initializes [Prisma Client](https://www.prisma.io/docs/concepts/components/prisma-client) to communicate with your CockroachDB cluster, based on the configuration in the `prisma/schema.prisma` file. - -## Step 4. Run the code - -The `index.js` file contains the code for `INSERT`, `SELECT`, `UPDATE`, and `DELETE` SQL operations: - -{% include_cached copy-clipboard.html %} -~~~ js -{% remote_include https://raw.githubusercontent.com/cockroachlabs/example-app-node-prisma/main/index.js %} -~~~ - -{{site.data.alerts.callout_info}} -In [production](recommended-production-settings.html#transaction-retries), we recommend implementing [client-side transaction retries](transactions.html#client-side-intervention) for all database operations. -{{site.data.alerts.end}} - -Run the application code: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ node index.js -~~~ - -~~~ -Customer rows inserted. { count: 10 } -Account rows inserted. { count: 10 } -Initial Account row values: - [ - { - id: '1961432f-f93e-4568-b2a5-ba08f73afde5', - customer_id: '079daee3-ecf2-4a0f-980b-4c3ea4c8b6a3', - balance: 914n - }, - { - id: '4ccd7eea-eb47-4aa9-9819-30b5aae58bf8', - customer_id: 'c0eeb465-ab60-4f02-9bf3-3451578d400d', - balance: 176n - }, - { - id: '53ed4f7d-72ee-4390-9487-9bf318357c77', - customer_id: 'a4c9e26e-f9d8-4c1b-ac20-1aa8611b134f', - balance: 54n - }, - { - id: '79a1f1b2-4050-4329-bf52-5df53fec749e', - customer_id: '392c7d15-5ab2-4149-9eee-8a3a44b36e9d', - balance: 482n - }, - { - id: '7e30f1e0-e873-4565-9ea3-3079a48a4886', - customer_id: '12cb3406-264a-417c-b0e6-86593e60dc18', - balance: 478n - }, - { - id: '94f461d5-3985-46c1-98f4-1896f15f0a16', - customer_id: 'e4c909a4-6683-429d-9831-dfcf792f4fb0', - balance: 240n - }, - { - id: 'a0c081f5-fb15-47cc-8dbb-85c6f15677d2', - customer_id: '91ece5f2-df03-4023-b112-2b4d5677981b', - balance: 520n - }, - { - id: 'a45b7c41-2f62-4620-be69-57d5d61186e4', - customer_id: 'c1824327-d6a1-4916-a666-ea157ef2a409', - balance: 50n - }, - { - id: 'dbe0dec5-257b-42ff-9d36-1b5a57e1a4ac', - customer_id: '6739eb2f-bcb1-4074-aab4-5860b04d227d', - balance: 468n - }, - { - id: 'ebc520b4-8df0-4e2f-8426-104594f6341c', - customer_id: 'f83e02cb-77cf-4347-9e0c-28cad65fac34', - balance: 336n - } -] -Account rows updated. 
{ count: 8 } -Updated Account row values: - [ - { - id: '1961432f-f93e-4568-b2a5-ba08f73afde5', - customer_id: '079daee3-ecf2-4a0f-980b-4c3ea4c8b6a3', - balance: 909n - }, - { - id: '4ccd7eea-eb47-4aa9-9819-30b5aae58bf8', - customer_id: 'c0eeb465-ab60-4f02-9bf3-3451578d400d', - balance: 171n - }, - { - id: '53ed4f7d-72ee-4390-9487-9bf318357c77', - customer_id: 'a4c9e26e-f9d8-4c1b-ac20-1aa8611b134f', - balance: 54n - }, - { - id: '79a1f1b2-4050-4329-bf52-5df53fec749e', - customer_id: '392c7d15-5ab2-4149-9eee-8a3a44b36e9d', - balance: 477n - }, - { - id: '7e30f1e0-e873-4565-9ea3-3079a48a4886', - customer_id: '12cb3406-264a-417c-b0e6-86593e60dc18', - balance: 473n - }, - { - id: '94f461d5-3985-46c1-98f4-1896f15f0a16', - customer_id: 'e4c909a4-6683-429d-9831-dfcf792f4fb0', - balance: 235n - }, - { - id: 'a0c081f5-fb15-47cc-8dbb-85c6f15677d2', - customer_id: '91ece5f2-df03-4023-b112-2b4d5677981b', - balance: 515n - }, - { - id: 'a45b7c41-2f62-4620-be69-57d5d61186e4', - customer_id: 'c1824327-d6a1-4916-a666-ea157ef2a409', - balance: 50n - }, - { - id: 'dbe0dec5-257b-42ff-9d36-1b5a57e1a4ac', - customer_id: '6739eb2f-bcb1-4074-aab4-5860b04d227d', - balance: 463n - }, - { - id: 'ebc520b4-8df0-4e2f-8426-104594f6341c', - customer_id: 'f83e02cb-77cf-4347-9e0c-28cad65fac34', - balance: 331n - } -] -All Customer rows deleted. { count: 10 } -~~~ - -## What's next? - -Read more about using [Prisma Client](https://www.prisma.io/docs/). - -{% include {{page.version.version}}/app/see-also-links.md %} diff --git a/src/current/v22.1/build-a-nodejs-app-with-cockroachdb-sequelize.md b/src/current/v22.1/build-a-nodejs-app-with-cockroachdb-sequelize.md deleted file mode 100644 index 347ce2a2bc3..00000000000 --- a/src/current/v22.1/build-a-nodejs-app-with-cockroachdb-sequelize.md +++ /dev/null @@ -1,95 +0,0 @@ ---- -title: Build a Node.js App with CockroachDB and Sequelize -summary: Learn how to use CockroachDB from a simple Node.js application with the Sequelize ORM. -toc: true -twitter: false -referral_id: docs_node_sequelize -docs_area: get_started ---- - -{% include {{ page.version.version }}/filter-tabs/crud-js.md %} - -This tutorial shows you how build a simple Node.js application with CockroachDB and the [Sequelize](https://sequelize.org/) ORM. - -## Step 1. Start CockroachDB - -{% include {{ page.version.version }}/setup/sample-setup.md %} - -## Step 2. Get the code - -Clone the sample code's GitHub repo: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ git clone https://github.com/cockroachlabs/example-app-node-sequelize -~~~ - -The sample code uses Sequelize to map Node.js-specific objects to some read and write SQL operations. - -## Step 3. Run the code - -To start the app: - -1. Set the `DATABASE_URL` environment variable to the connection string: - -
                              - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ export DATABASE_URL="postgresql://root@localhost:26257?sslmode=disable" - ~~~ - -
                              - -
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ shell
-    $ export DATABASE_URL="{connection-string}"
-    ~~~
-
-    Where `{connection-string}` is the connection string you copied earlier.
-
                              - - The app uses the connection string saved to the `DATABASE_URL` environment variable to connect to your cluster and execute the code. - -1. Install the app dependencies: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cd example-app-node-sequelize - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ npm install - ~~~ - -1. Run the code: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ node app.js - ~~~ - - The output should look similar to the following: - - ~~~ shell - Executing (default): SELECT version() AS version - Executing (default): DROP TABLE IF EXISTS "accounts" CASCADE; - Executing (default): SELECT crdb_internal.increment_feature_counter(concat('Sequelize ', '6.17')) - Executing (default): SELECT crdb_internal.increment_feature_counter(concat('sequelize-cockroachdb ', '6.0.5')) - Executing (default): CREATE TABLE IF NOT EXISTS "accounts" ("id" INTEGER , "balance" INTEGER, "createdAt" TIMESTAMP WITH TIME ZONE NOT NULL, "updatedAt" TIMESTAMP WITH TIME ZONE NOT NULL, PRIMARY KEY ("id")); - Executing (default): SELECT i.relname AS name, ix.indisprimary AS primary, ix.indisunique AS unique, ix.indkey AS indkey, array_agg(a.attnum) as column_indexes, array_agg(a.attname) AS column_names, pg_get_indexdef(ix.indexrelid) AS definition FROM pg_class t, pg_class i, pg_index ix, pg_attribute a WHERE t.oid = ix.indrelid AND i.oid = ix.indexrelid AND a.attrelid = t.oid AND t.relkind = 'r' and t.relname = 'accounts' GROUP BY i.relname, ix.indexrelid, ix.indisprimary, ix.indisunique, ix.indkey ORDER BY i.relname; - Executing (default): INSERT INTO "accounts" ("id","balance","createdAt","updatedAt") VALUES (1,1000,'2022-03-30 19:56:34.483 +00:00','2022-03-30 19:56:34.483 +00:00'),(2,250,'2022-03-30 19:56:34.483 +00:00','2022-03-30 19:56:34.483 +00:00') RETURNING "id","balance","createdAt","updatedAt"; - Executing (default): SELECT "id", "balance", "createdAt", "updatedAt" FROM "accounts" AS "accounts"; - 1 1000 - 2 250 - ~~~ - -## What's next? - -Read more about using the [Sequelize ORM](https://sequelize.org/), or check out a more realistic implementation of Sequelize with CockroachDB in our [`examples-orms`](https://github.com/cockroachdb/examples-orms) repository. - -{% include {{ page.version.version }}/app/see-also-links.md %} diff --git a/src/current/v22.1/build-a-nodejs-app-with-cockroachdb.md b/src/current/v22.1/build-a-nodejs-app-with-cockroachdb.md deleted file mode 100644 index c7880d57748..00000000000 --- a/src/current/v22.1/build-a-nodejs-app-with-cockroachdb.md +++ /dev/null @@ -1,104 +0,0 @@ ---- -title: Build a Simple CRUD Node.js App with CockroachDB and the node-postgres Driver -summary: Learn how to use CockroachDB from a simple CRUD application that uses the node-postgres driver. -toc: true -twitter: false -referral_id: docs_node_postgres -docs_area: get_started ---- - -{% include {{ page.version.version }}/filter-tabs/crud-js.md %} - -This tutorial shows you how build a simple Node.js application with CockroachDB and the [node-postgres driver](https://node-postgres.com/). - -## Step 1. Start CockroachDB - -{% include {{ page.version.version }}/setup/sample-setup.md %} - -## Step 2. 
Get the code - -Clone the code's GitHub repo: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ git clone https://github.com/cockroachlabs/example-app-node-postgres -~~~ - -The project has the following directory structure: - -~~~ -├── README.md -├── app.js -├── dbinit.sql -└── package.json -~~~ - -The `dbinit.sql` file initializes the database schema that the application uses: - -{% include_cached copy-clipboard.html %} -~~~ sql -{% remote_include https://raw.githubusercontent.com/cockroachlabs/example-app-node-postgres/main/dbinit.sql %} -~~~ - -The `app.js` file contains the code for `INSERT`, `SELECT`, `UPDATE`, and `DELETE` SQL operations: - -{% include_cached copy-clipboard.html %} -~~~ js -{% remote_include https://raw.githubusercontent.com/cockroachlabs/example-app-node-postgres/main/app.js %} -~~~ - -All of the database operations are wrapped in a helper function named `retryTxn`. This function attempts to commit statements in the context of an explicit transaction. If a [retry error](transaction-retry-error-reference.html) is thrown, the wrapper will retry committing the transaction, with [exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff), until the maximum number of retries is reached (by default, 15). - -## Step 3. Initialize the database - -1. Navigate to the `example-app-node-postgres` directory: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cd example-app-node-postgres - ~~~ - -{% include {{ page.version.version }}/setup/init-bank-sample.md %} - -## Step 4. Run the code - -1. Install the app requirements: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ npm install - ~~~ - -1. Run the app: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ node app.js - ~~~ - - The app uses the connection string saved to the `DATABASE_URL` environment variable to connect to your cluster and execute the code. - - The output should look like this: - - ~~~ - Initializing accounts table... - New account balances: - { id: 'aa0e9b22-0c23-469b-a9e1-b2ace079f44c', balance: '1000' } - { id: 'bf8b96da-2c38-4d55-89a0-b2b6ed63ff9e', balance: '0' } - { id: 'e43d76d6-388e-4ee6-8b73-a063a63a2138', balance: '250' } - Transferring funds... - New account balances: - { id: 'aa0e9b22-0c23-469b-a9e1-b2ace079f44c', balance: '900' } - { id: 'bf8b96da-2c38-4d55-89a0-b2b6ed63ff9e', balance: '0' } - { id: 'e43d76d6-388e-4ee6-8b73-a063a63a2138', balance: '350' } - Deleting a row... - New account balances: - { id: 'aa0e9b22-0c23-469b-a9e1-b2ace079f44c', balance: '900' } - { id: 'e43d76d6-388e-4ee6-8b73-a063a63a2138', balance: '350' } - ~~~ - -## What's next? - -Read more about using the [node-postgres driver](https://www.npmjs.com/package/pg). - -{% include {{page.version.version}}/app/see-also-links.md %} diff --git a/src/current/v22.1/build-a-python-app-with-cockroachdb-django.md b/src/current/v22.1/build-a-python-app-with-cockroachdb-django.md deleted file mode 100644 index 24d40e95d8c..00000000000 --- a/src/current/v22.1/build-a-python-app-with-cockroachdb-django.md +++ /dev/null @@ -1,242 +0,0 @@ ---- -title: Build a Python App with CockroachDB and Django -summary: Learn how to use CockroachDB from a simple Django application. -toc: true -twitter: false -referral_id: docs_python_django -docs_area: get_started ---- - -{% include {{ page.version.version }}/filter-tabs/crud-python.md %} - -This tutorial shows you how build a simple Python application with CockroachDB and the [Django](https://www.djangoproject.com/) framework. 
- -CockroachDB supports Django versions 3.1+. - -{{site.data.alerts.callout_info}} -The example code and instructions on this page use Python 3.9 and Django 3.1. -{{site.data.alerts.end}} - -## Step 1. Start CockroachDB - -{% include {{ page.version.version }}/setup/sample-setup-parameters-certs.md %} - -## Step 2. Get the sample code - -Clone the code's GitHub repo: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ git clone https://github.com/cockroachlabs/example-app-python-django/ -~~~ - -The project directory structure should look like this: - -~~~ -├── Dockerfile -├── README.md -├── cockroach_example -│   ├── cockroach_example -│   │   ├── __init__.py -│   │   ├── asgi.py -│   │   ├── migrations -│   │   │   ├── 0001_initial.py -│   │   │   └── __init__.py -│   │   ├── models.py -│   │   ├── settings.py -│   │   ├── urls.py -│   │   ├── views.py -│   │   └── wsgi.py -│   └── manage.py -└── requirements.txt -~~~ - -## Step 3. Install the application requirements - -To use CockroachDB with Django, the following modules are required: - -- [`django`](https://docs.djangoproject.com/en/3.1/topics/install/) -- [`psycopg2`](https://pypi.org/project/psycopg2/) (recommended for production environments) or [`psycopg2-binary`](https://pypi.org/project/psycopg2-binary/) (recommended for development and testing). -- [`django-cockroachdb`](https://github.com/cockroachdb/django-cockroachdb) - -{{site.data.alerts.callout_info}} -The major version of `django-cockroachdb` must correspond to the major version of `django`. The minor release numbers do not need to match. -{{site.data.alerts.end}} - -The `requirements.txt` file at the top level of the `example-app-python-django` project directory contains a list of the requirements needed to run this application: - -{% include_cached copy-clipboard.html %} -~~~ python -{% remote_include https://raw.githubusercontent.com/cockroachlabs/example-app-python-django/master/requirements.txt %} -~~~ - -This tutorial uses [`virtualenv`](https://virtualenv.pypa.io) for dependency management. - -1. Install `virtualenv`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ pip install virtualenv - ~~~ - -1. At the top level of the app's project directory, create and then activate a virtual environment: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ virtualenv env - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ source env/bin/activate - ~~~ - -1. Install the modules listed in `requirements.txt` to the virtual environment: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ pip install -r requirements.txt - ~~~ - -## Step 4. Build out the application - -
                              - -### Configure the database connection - -Open `cockroach_example/cockroach_example/settings.py`, and configure [the `DATABASES` dictionary](https://docs.djangoproject.com/en/3.2/ref/settings/#databases) to connect to your cluster using the connection parameters that you copied earlier. - -{% include_cached copy-clipboard.html %} -~~~ python -DATABASES = { - 'default': { - 'ENGINE': 'django_cockroachdb', - 'NAME': '{database}', - 'USER': '{username}', - 'PASSWORD': '{password}', - 'HOST': '{host}', - 'PORT': '{port}', - 'OPTIONS': { - 'sslmode': 'verify-full' - }, - }, -} -~~~ - -For more information about configuration a Django connection to CockroachDB {{ site.data.products.serverless }}, see [Connect to a CockroachDB Cluster](connect-to-the-database.html?filters=python&filters=django). - -After you have configured the app's database connection, you can start building out the application. - -
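-
-If you prefer not to hardcode credentials in `settings.py`, one option (not used by the sample project) is to parse them from a `DATABASE_URL` environment variable. The following is a minimal sketch that assumes a standard `postgresql://` connection URL is set in the environment:
-
-{% include_cached copy-clipboard.html %}
-~~~ python
-import os
-from urllib.parse import urlparse
-
-# Example value: postgresql://{username}:{password}@{host}:{port}/{database}
-url = urlparse(os.environ["DATABASE_URL"])
-
-DATABASES = {
-    'default': {
-        'ENGINE': 'django_cockroachdb',
-        'NAME': url.path.lstrip('/'),
-        'USER': url.username,
-        'PASSWORD': url.password,
-        'HOST': url.hostname,
-        'PORT': url.port,
-        'OPTIONS': {
-            'sslmode': 'verify-full'
-        },
-    },
-}
-~~~
-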
                              - -### Models - -Start by building some [models](https://docs.djangoproject.com/en/3.1/topics/db/models/), defined in a file called `models.py`. You can copy the sample code below and paste it into a new file, or you can download the file directly. - -{% include_cached copy-clipboard.html %} -~~~ python -{% remote_include https://raw.githubusercontent.com/cockroachlabs/example-app-python-django/master/cockroach_example/cockroach_example/models.py %} -~~~ - -In this file, we define some simple classes that map to the tables in the cluster. - -### Views - -Next, build out some [class-based views](https://docs.djangoproject.com/en/3.1/topics/class-based-views/) for the application in a file called `views.py`. You can copy the sample code below and paste it into a new file, or you can download the file directly. - -{% include_cached copy-clipboard.html %} -~~~ python -{% remote_include https://raw.githubusercontent.com/cockroachlabs/example-app-python-django/master/cockroach_example/cockroach_example/views.py %} -~~~ - -This file defines the application's views as classes. Each view class corresponds to one of the table classes defined in `models.py`. The methods of these classes define read and write transactions on the tables in the database. - -Importantly, the file defines a [transaction retry loop](transactions.html#transaction-retries) in the decorator function `retry_on_exception()`. This function decorates each view method, ensuring that transaction ordering guarantees meet the ANSI [SERIALIZABLE](https://en.wikipedia.org/wiki/Isolation_(database_systems)#Serializable) isolation level. For more information about how transactions (and retries) work, see [Transactions](transactions.html). - -### URL routes - -Lastly, define some [URL routes](https://docs.djangoproject.com/en/3.1/topics/http/urls/) in a file called `urls.py`. You can copy the sample code below and paste it into the existing `urls.py` file, or you can download the file directly and replace the existing one. - -{% include_cached copy-clipboard.html %} -~~~ python -{% remote_include https://raw.githubusercontent.com/cockroachlabs/example-app-python-django/master/cockroach_example/cockroach_example/urls.py %} -~~~ - -## Step 5. Initialize the database - -In the top `cockroach_example` directory, use the [`manage.py` script](https://docs.djangoproject.com/en/3.1/ref/django-admin/) to create [Django migrations](https://docs.djangoproject.com/en/3.1/topics/migrations/) that initialize the database for the application: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ python manage.py makemigrations cockroach_example -~~~ - -{% include_cached copy-clipboard.html %} -~~~ shell -$ python manage.py migrate -~~~ - -This initializes the tables defined in `models.py`, in addition to some other tables for the admin functionality included with Django's starter application. - -## Step 6. Run the app - -1. In a different terminal, navigate to the top of the `cockroach_example` directory, and start the app: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ python manage.py runserver 0.0.0.0:8000 - ~~~ - - The output should look like this: - - ~~~ - ... - Starting development server at http://0.0.0.0:8000/ - Quit the server with CONTROL-C. - ~~~ - - To perform simple reads and writes to the database, you can send HTTP requests to the application server listening at `http://0.0.0.0:8000/`. - -1. 
In a new terminal, use `curl` to send a POST request to the application: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl --header "Content-Type: application/json" \ - --request POST \ - --data '{"name":"Carl"}' http://0.0.0.0:8000/customer/ - ~~~ - - This request inserts a new row into the `cockroach_example_customers` table. - -1. Send a GET request to read from the `cockroach_example_customers` table: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl http://0.0.0.0:8000/customer/ - ~~~ - - ~~~ - [{"id": "bb7d6c4d-efb3-45f8-b790-9911aae7d8b2", "name": "Carl"}] - ~~~ - - You can also query the table directly in the [SQL shell](cockroach-sql.html) to see the changes: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SELECT * FROM cockroach_example_customers; - ~~~ - - ~~~ - id | name - ---------------------------------------+------- - bb7d6c4d-efb3-45f8-b790-9911aae7d8b2 | Carl - (1 row) - ~~~ - -1. Enter **Ctrl+C** to stop the application. - -## What's next? - -Read more about writing a [Django app](https://docs.djangoproject.com/en/3.1/intro/tutorial01/). - -{% include {{page.version.version}}/app/see-also-links.md %} diff --git a/src/current/v22.1/build-a-python-app-with-cockroachdb-psycopg3.md b/src/current/v22.1/build-a-python-app-with-cockroachdb-psycopg3.md deleted file mode 100644 index 57a15b1b138..00000000000 --- a/src/current/v22.1/build-a-python-app-with-cockroachdb-psycopg3.md +++ /dev/null @@ -1,104 +0,0 @@ ---- -title: Build a Python App with CockroachDB and Psycopg 3 -summary: Learn how to use CockroachDB from a simple Python application with the Psycopg 3 driver. -toc: true -twitter: false -referral_id: docs_python_psycopg3 -docs_area: get_started ---- - -{% include {{ page.version.version }}/filter-tabs/crud-python.md %} - -{% include cockroach_u_pydev.md %} - -This tutorial shows you how build a simple Python application with CockroachDB and the [Psycopg 3](https://www.psycopg.org/) driver. - -## Step 1. Start CockroachDB - -{% include {{ page.version.version }}/setup/sample-setup-certs.md %} - -## Step 2. Get the sample code - -Clone the sample code's GitHub repo: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ git clone https://github.com/cockroachdb/example-app-python-psycopg3 -~~~ - -The sample code in `example.py` does the following: - -- Creates an `accounts` table and inserts some rows -- Transfers funds between two accounts inside a [transaction](transactions.html) -- Deletes the accounts from the table before exiting so you can re-run the example code - -To [handle transaction retry errors](error-handling-and-troubleshooting.html#transaction-retry-errors), the code uses an application-level retry loop that, in case of error, sleeps before trying the funds transfer again. If it encounters another retry error, it sleeps for a longer interval, implementing [exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff). - - -## Step 3. Install the Psycopg 3 driver - -`psycopg[binary]` is the sample app's only third-party module dependency. - -To install `psycopg[binary]`, run the following command: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ pip3 install "psycopg[binary]" -~~~ - -For other ways to install Psycopg, see the [official documentation](https://www.psycopg.org/psycopg3/docs/basic/install.html). - -## Step 4. Run the code - -1. Set the `DATABASE_URL` environment variable to the connection string to your cluster: - -
                              - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ export DATABASE_URL="postgresql://root@localhost:26257?sslmode=disable" - ~~~ - -
                              - -
                              - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ export DATABASE_URL="{connection-string}" - ~~~ - - Where `{connection-string}` is the connection string you copied earlier. - -
                              - - The app uses the connection string saved to the `DATABASE_URL` environment variable to connect to your cluster and execute the code. - -1. Run the code: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cd example-app-python-psycopg3 - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ python3 example.py - ~~~ - - The output should show the account balances before and after the funds transfer: - - ~~~ - Balances at Thu Aug 4 15:51:03 2022: - account id: 2e964b45-2034-49a7-8ab8-c5d0082b71f1 balance: $1000 - account id: 889cb1eb-b747-46f4-afd0-15d70844147f balance: $250 - Balances at Thu Aug 4 15:51:03 2022: - account id: 2e964b45-2034-49a7-8ab8-c5d0082b71f1 balance: $900 - account id: 889cb1eb-b747-46f4-afd0-15d70844147f balance: $350 - ~~~ - -## What's next? - -Read more about using the [Python psycopg3 driver](https://www.psycopg.org/docs/). - -{% include {{page.version.version}}/app/see-also-links.md %} diff --git a/src/current/v22.1/build-a-python-app-with-cockroachdb-sqlalchemy.md b/src/current/v22.1/build-a-python-app-with-cockroachdb-sqlalchemy.md deleted file mode 100644 index 2af1cb677f7..00000000000 --- a/src/current/v22.1/build-a-python-app-with-cockroachdb-sqlalchemy.md +++ /dev/null @@ -1,223 +0,0 @@ ---- -title: Build a Simple CRUD Python App with CockroachDB and SQLAlchemy -summary: Learn how to use CockroachDB from a simple Python application with SQLAlchemy. -toc: true -twitter: false -referral_id: docs_python_sqlalchemy -docs_area: get_started ---- - -{% include {{ page.version.version }}/filter-tabs/crud-python.md %} - -{% include cockroach_u_pydev.md %} - -This tutorial shows you how build a simple CRUD Python application with CockroachDB and the [SQLAlchemy](https://docs.sqlalchemy.org/en/latest/) ORM. - -## Step 1. Start CockroachDB - -{% include {{ page.version.version }}/setup/sample-setup-certs.md %} - -## Step 2. 
Get the code - -Clone the code's GitHub repo: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ git clone https://github.com/cockroachlabs/example-app-python-sqlalchemy/ -~~~ - -The project has the following directory structure: - -~~~ -├── README.md -├── dbinit.sql -├── main.py -├── models.py -└── requirements.txt -~~~ - -The `requirements.txt` file includes the required libraries to connect to CockroachDB with SQLAlchemy, including the [`sqlalchemy-cockroachdb` Python package](https://github.com/cockroachdb/sqlalchemy-cockroachdb), which accounts for some differences between CockroachDB and PostgreSQL: - -{% include_cached copy-clipboard.html %} -~~~ python -{% remote_include https://raw.githubusercontent.com/cockroachlabs/example-app-python-sqlalchemy/master/requirements.txt %} -~~~ - -The `dbinit.sql` file initializes the database schema that the application uses: - -{% include_cached copy-clipboard.html %} -~~~ python -{% remote_include https://raw.githubusercontent.com/cockroachlabs/example-app-python-sqlalchemy/master/dbinit.sql %} -~~~ - -The `models.py` uses SQLAlchemy to map the `Accounts` table to a Python object: - -{% include_cached copy-clipboard.html %} -~~~ python -{% remote_include https://raw.githubusercontent.com/cockroachlabs/example-app-python-sqlalchemy/master/models.py %} -~~~ - -The `main.py` uses SQLAlchemy to map Python methods to SQL operations: - -{% include_cached copy-clipboard.html %} -~~~ python -{% remote_include https://raw.githubusercontent.com/cockroachlabs/example-app-python-sqlalchemy/master/main.py %} -~~~ - -`main.py` also executes the `main` method of the program. - -## Step 3. Install the application requirements - -This tutorial uses [`virtualenv`](https://virtualenv.pypa.io) for dependency management. - -1. Install `virtualenv`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ pip install virtualenv - ~~~ - -1. At the top level of the app's project directory, create and then activate a virtual environment: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ virtualenv env - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ source env/bin/activate - ~~~ - -1. Install the required modules to the virtual environment: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ pip install -r requirements.txt - ~~~ - -## Step 4. Initialize the database - -{% include {{ page.version.version }}/setup/init-bank-sample.md %} - -## Step 5. Run the code - -`main.py` uses the connection string saved to the `DATABASE_URL` environment variable to connect to your cluster and execute the code. - -Run the app: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ python main.py -~~~ - -The application will connect to CockroachDB, and then perform some simple row inserts, updates, and deletes. - -The output should look something like the following: - -~~~ -Creating new accounts... -Created new account with id 3a8b74c8-6a05-4247-9c60-24b46e3a88fd and balance 248835. -Created new account with id c3985926-5b77-4c6d-a73d-7c0d4b2a51e7 and balance 781972. -... -Created new account with id 7b41386c-11d3-465e-a2a0-56e0dcd2e7db and balance 984387. -Random account balances: -Account 7ad14d02-217f-48ca-a53c-2c3a2528a0d9: 800795 -Account 4040aeba-7194-4f29-b8e5-a27ed4c7a297: 149861 -Transferring 400397 from account 7ad14d02-217f-48ca-a53c-2c3a2528a0d9 to account 4040aeba-7194-4f29-b8e5-a27ed4c7a297... -Transfer complete. 
-New balances: -Account 7ad14d02-217f-48ca-a53c-2c3a2528a0d9: 400398 -Account 4040aeba-7194-4f29-b8e5-a27ed4c7a297: 550258 -Deleting existing accounts... -Deleted account 41247e24-6210-4032-b622-c10b3c7222de. -Deleted account 502450e4-6daa-4ced-869c-4dff62dc52de. -Deleted account 6ff06ef0-423a-4b08-8b87-48af2221bc18. -Deleted account a1acb134-950c-4882-9ac7-6d6fbdaaaee1. -Deleted account e4f33c55-7230-4080-b5ac-5dde8a7ae41d. -~~~ - -In a SQL shell connected to the cluster, you can verify that the rows were inserted, updated, and deleted successfully: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT COUNT(*) FROM accounts; -~~~ - -~~~ - count ---------- - 95 -(1 row) -~~~ - -## Best practices - -### Use the `run_transaction` function - -We strongly recommend using the [`sqlalchemy_cockroachdb.run_transaction()`](https://github.com/cockroachdb/sqlalchemy-cockroachdb/blob/master/sqlalchemy_cockroachdb/transaction.py) function as shown in the code samples on this page. This abstracts the details of [transaction retries](transactions.html#transaction-retries) away from your application code. Transaction retries are more frequent in CockroachDB than in some other databases because we use [optimistic concurrency control](https://en.wikipedia.org/wiki/Optimistic_concurrency_control) rather than locking. Because of this, a CockroachDB transaction may have to be tried more than once before it can commit. This is part of how we ensure that our transaction ordering guarantees meet the ANSI [SERIALIZABLE](https://en.wikipedia.org/wiki/Isolation_(database_systems)#Serializable) isolation level. - -In addition to the above, using `run_transaction` has the following benefits: - -- Because it must be passed a [sqlalchemy.orm.session.sessionmaker](https://docs.sqlalchemy.org/en/latest/orm/session_api.html#session-and-sessionmaker) object (*not* a [session][session]), it ensures that a new session is created exclusively for use by the callback, which protects you from accidentally reusing objects via any sessions created outside the transaction. -- It abstracts away the [client-side transaction retry logic](transactions.html#client-side-intervention) from your application, which keeps your application code portable across different databases. For example, the sample code given on this page works identically when run against PostgreSQL (modulo changes to the prefix and port number in the connection string). - -For more information about how transactions (and retries) work, see [Transactions](transactions.html). - -### Avoid mutations of session and/or transaction state inside `run_transaction()` - -In general, this is in line with the recommendations of the [SQLAlchemy FAQs](https://docs.sqlalchemy.org/en/latest/orm/session_basics.html#session-frequently-asked-questions), which state (with emphasis added by the original author) that - -> As a general rule, the application should manage the lifecycle of the session *externally* to functions that deal with specific data. This is a fundamental separation of concerns which keeps data-specific operations agnostic of the context in which they access and manipulate that data. - -and - -> Keep the lifecycle of the session (and usually the transaction) **separate and external**. - -In keeping with the above recommendations from the official docs, we **strongly recommend** avoiding any explicit mutations of the transaction state inside the callback passed to `run_transaction`, since that will lead to breakage. 
Specifically, do not make calls to the following functions from inside `run_transaction`: - -- [`sqlalchemy.orm.Session.commit()`](https://docs.sqlalchemy.org/en/latest/orm/session_api.html?highlight=commit#sqlalchemy.orm.session.Session.commit) (or other variants of `commit()`): This is not necessary because `cockroachdb.sqlalchemy.run_transaction` handles the savepoint/commit logic for you. -- [`sqlalchemy.orm.Session.rollback()`](https://docs.sqlalchemy.org/en/latest/orm/session_api.html?highlight=rollback#sqlalchemy.orm.session.Session.rollback) (or other variants of `rollback()`): This is not necessary because `cockroachdb.sqlalchemy.run_transaction` handles the commit/rollback logic for you. -- [`Session.flush()`][session.flush]: This will not work as expected with CockroachDB because CockroachDB does not support nested transactions, which are necessary for `Session.flush()` to work properly. If the call to `Session.flush()` encounters an error and aborts, it will try to rollback. This will not be allowed by the currently-executing CockroachDB transaction created by `run_transaction()`, and will result in an error message like the following: `sqlalchemy.orm.exc.DetachedInstanceError: Instance is not bound to a Session; attribute refresh operation cannot proceed (Background on this error at: http://sqlalche.me/e/bhk3)`. - -### Break up large transactions into smaller units of work - -If you see an error message like `transaction is too large to complete; try splitting into pieces`, you are trying to commit too much data in a single transaction. As described in our [Cluster Settings](cluster-settings.html) docs, the size limit for transactions is defined by the `kv.transaction.max_intents_bytes` setting, which defaults to 256 KiB. Although this setting can be changed by an admin, we strongly recommend against it in most cases. - -Instead, we recommend breaking your transaction into smaller units of work (or "chunks"). A pattern that works for inserting large numbers of objects using `run_transaction` to handle retries automatically for you is shown below. - -{% include_cached copy-clipboard.html %} -~~~ python -{% include {{page.version.version}}/app/python/sqlalchemy/sqlalchemy-large-txns.py %} -~~~ - -### Use `IMPORT` to read in large data sets - -If you are trying to get a large data set into CockroachDB all at once (a bulk import), avoid writing client-side code that uses an ORM and use the [`IMPORT`](import.html) statement instead. It is much faster and more efficient than making a series of [`INSERT`s](insert.html) and [`UPDATE`s](update.html) such as are generated by calls to [`session.bulk_save_objects()`](https://docs.sqlalchemy.org/en/latest/orm/session_api.html?highlight=bulk_save_object#sqlalchemy.orm.session.Session.bulk_save_objects). - -For more information about importing data from PostgreSQL, see [Migrate from PostgreSQL](migrate-from-postgres.html). - -For more information about importing data from MySQL, see [Migrate from MySQL](migrate-from-mysql.html). - -### Prefer the query builder - -In general, we recommend using the query-builder APIs of SQLAlchemy (e.g., [`Engine.execute()`](https://docs.sqlalchemy.org/en/latest/core/connections.html?highlight=execute#sqlalchemy.engine.Engine.execute)) in your application over the [Session][session]/ORM APIs if at all possible. That way, you know exactly what SQL is being generated and sent to CockroachDB, which has the following benefits: - -- It's easier to debug your SQL queries and make sure they are working as expected. 
-- You can more easily tune SQL query performance by issuing different statements, creating and/or using different indexes, etc. For more information, see [SQL Performance Best Practices](performance-best-practices-overview.html). - -### Joins without foreign keys - -SQLAlchemy relies on the existence of [foreign keys](foreign-key.html) to generate [`JOIN` expressions](joins.html) from your application code. If you remove foreign keys from your schema, SQLAlchemy will not generate joins for you. As a workaround, you can [create a "custom foreign condition" by adding a `relationship` field to your table objects](https://stackoverflow.com/questions/37806625/sqlalchemy-create-relations-but-without-foreign-key-constraint-in-db), or do the equivalent work in your application. - -## See also - -- The [SQLAlchemy](https://docs.sqlalchemy.org/en/latest/) docs -- [Transactions](transactions.html) - -{% include {{page.version.version}}/app/see-also-links.md %} - - - -[session.flush]: https://docs.sqlalchemy.org/en/latest/orm/session_api.html#sqlalchemy.orm.session.Session.flush -[session]: https://docs.sqlalchemy.org/en/latest/orm/session.html diff --git a/src/current/v22.1/build-a-python-app-with-cockroachdb.md b/src/current/v22.1/build-a-python-app-with-cockroachdb.md deleted file mode 100644 index 155f193d7d0..00000000000 --- a/src/current/v22.1/build-a-python-app-with-cockroachdb.md +++ /dev/null @@ -1,104 +0,0 @@ ---- -title: Build a Python App with CockroachDB and psycopg2 -summary: Learn how to use CockroachDB from a simple Python application with the psycopg2 driver. -toc: true -twitter: false -referral_id: docs_python_psycopg2 -docs_area: get_started ---- - -{% include {{ page.version.version }}/filter-tabs/crud-python.md %} - -{% include cockroach_u_pydev.md %} - -This tutorial shows you how build a simple Python application with CockroachDB and the [psycopg2](https://www.psycopg.org/) driver. - -## Step 1. Start CockroachDB - -{% include {{ page.version.version }}/setup/sample-setup-certs.md %} - -## Step 2. Get the sample code - -Clone the sample code's GitHub repo: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ git clone https://github.com/cockroachlabs/hello-world-python-psycopg2 -~~~ - -The sample code in `example.py` does the following: - -- Creates an `accounts` table and inserts some rows -- Transfers funds between two accounts inside a [transaction](transactions.html) -- Deletes the accounts from the table before exiting so you can re-run the example code - -To [handle transaction retry errors](error-handling-and-troubleshooting.html#transaction-retry-errors), the code uses an application-level retry loop that, in case of error, sleeps before trying the funds transfer again. If it encounters another retry error, it sleeps for a longer interval, implementing [exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff). - - -## Step 3. Install the psycopg2 driver - -`psycopg2-binary` is the sample app's only third-party module dependency. - -To install `psycopg2-binary`, run the following command: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ pip install psycopg2-binary -~~~ - -For other ways to install psycopg2, see the [official documentation](http://initd.org/psycopg/docs/install.html). - -## Step 4. Run the code - -1. Set the `DATABASE_URL` environment variable to the connection string to your cluster: - -
                              - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ export DATABASE_URL="postgresql://root@localhost:26257/defaultdb?sslmode=disable" - ~~~ - -
                              - -
                              - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ export DATABASE_URL="{connection-string}" - ~~~ - - Where `{connection-string}` is the connection string you copied earlier. - -
                              - - The app uses the connection string saved to the `DATABASE_URL` environment variable to connect to your cluster and execute the code. - -1. Run the code: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cd hello-world-python-psycopg2 - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ python example.py - ~~~ - - The output should show the account balances before and after the funds transfer: - - ~~~ - Balances at Thu Aug 4 15:51:03 2022: - account id: 2e964b45-2034-49a7-8ab8-c5d0082b71f1 balance: $1000 - account id: 889cb1eb-b747-46f4-afd0-15d70844147f balance: $250 - Balances at Thu Aug 4 15:51:03 2022: - account id: 2e964b45-2034-49a7-8ab8-c5d0082b71f1 balance: $900 - account id: 889cb1eb-b747-46f4-afd0-15d70844147f balance: $350 - ~~~ - -## What's next? - -Read more about using the [Python psycopg2 driver](https://www.psycopg.org/docs/). - -{% include {{page.version.version}}/app/see-also-links.md %} diff --git a/src/current/v22.1/build-a-ruby-app-with-cockroachdb-activerecord.md b/src/current/v22.1/build-a-ruby-app-with-cockroachdb-activerecord.md deleted file mode 100644 index c404f9bea9c..00000000000 --- a/src/current/v22.1/build-a-ruby-app-with-cockroachdb-activerecord.md +++ /dev/null @@ -1,106 +0,0 @@ ---- -title: Build a Ruby App with CockroachDB and Active Record -summary: Learn how to use CockroachDB from a simple Ruby script with the Active Record gem. -toc: true -twitter: false -referral_id: docs_ruby_activerecord -docs_area: get_started ---- - -{% include {{ page.version.version }}/filter-tabs/crud-ruby.md %} - -This tutorial shows you how build a simple Ruby application with CockroachDB and [Active Record](http://guides.rubyonrails.org/active_record_basics.html). CockroachDB provides an Active Record adapter for CockroachDB as a [RubyGem](https://rubygems.org/gems/activerecord-cockroachdb-adapter). - -{{site.data.alerts.callout_success}} -For a more realistic use of Active Record with CockroachDB in a Rails app, see our [`examples-orms`](https://github.com/cockroachdb/examples-orms) repository. -{{site.data.alerts.end}} - -## Step 1. Start CockroachDB - -{% include {{ page.version.version }}/setup/sample-setup-certs.md %} - -## Step 2. Get the code - -Clone [the code's GitHub repository](https://github.com/cockroachlabs/example-app-ruby-activerecord). - -{% include_cached copy-clipboard.html %} -~~~ shell -git clone https://github.com/cockroachlabs/example-app-ruby-activerecord -~~~ - -## Step 3. Configure the dependencies - -1. Install `libpq` for your platform. - - For example, to install `libpq` on macOS with Homebrew, run the following command: - - {% include_cached copy-clipboard.html %} - ~~~shell - brew install libpq - ~~~ - -1. Configure `bundle` to use `libpq`. - - For example, if you installed `libpq` on macOS with Homebrew, run the following command from the `example-app-ruby-activerecord` directory: - - {% include_cached copy-clipboard.html %} - ~~~shell - bundle config --local build.pg --with-opt-dir="{libpq-path}" - ~~~ - - Where `{libpq-path}` is the full path to the `libpq` installation on your machine (e.g., `/usr/local/opt/libpq`). - -1. Install the dependencies: - - {% include_cached copy-clipboard.html %} - ~~~shell - bundle install - ~~~ - -## Step 4. Run the code - -1. Set the `DATABASE_URL` environment variable to the connection string to your CockroachDB {{ site.data.products.cloud }} cluster: - -
                              - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ export DATABASE_URL="postgresql://root@localhost:26257?sslmode=disable" - ~~~ - -
                              - -
                              - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ export DATABASE_URL="{connection-string}" - ~~~ - - Where `{connection-string}` is the connection string you copied earlier. - -
                              - - The app uses the connection string saved to the `DATABASE_URL` environment variable to connect to your cluster and execute the code. - -1. Run the code to create a table and insert some rows: - - {% include_cached copy-clipboard.html %} - ~~~ shell - ruby main.rb - ~~~ - - The output should be similar to the following: - - ~~~ - -- create_table(:accounts, {:force=>true, :id=>:integer}) - -> 0.1998s - account: 1 balance: 1000 - account: 2 balance: 250 - ~~~ - -## What's next? - -Read more about using [Active Record](http://guides.rubyonrails.org/active_record_basics.html), or check out a more realistic implementation of Active Record with CockroachDB in a Rails app in our [`examples-orms`](https://github.com/cockroachdb/examples-orms) repository. - -{% include {{page.version.version}}/app/see-also-links.md %} diff --git a/src/current/v22.1/build-a-ruby-app-with-cockroachdb.md b/src/current/v22.1/build-a-ruby-app-with-cockroachdb.md deleted file mode 100644 index 9e57f279e3a..00000000000 --- a/src/current/v22.1/build-a-ruby-app-with-cockroachdb.md +++ /dev/null @@ -1,110 +0,0 @@ ---- -title: Build a Ruby App with CockroachDB and the Ruby pg Driver -summary: Learn how to use CockroachDB from a simple Ruby application with the pg client driver. -toc: true -twitter: false -referral_id: docs_ruby_pg -docs_area: get_started ---- - -{% include {{ page.version.version }}/filter-tabs/crud-ruby.md %} - -This tutorial shows you how build a simple Ruby application with CockroachDB and the [Ruby pg driver](https://deveiate.org/code/pg/PG/Connection.html). - -## Step 1. Start CockroachDB - -{% include {{ page.version.version }}/setup/sample-setup-certs.md %} - -## Step 2. Get the code - -Clone [the code's GitHub repository](https://github.com/cockroachlabs/hello-world-ruby-pg). - -{% include_cached copy-clipboard.html %} -~~~shell -git clone https://github.com/cockroachlabs/hello-world-ruby-pg -~~~ - -The code connects as the user you created and executes some basic SQL statements: creating a table, inserting rows, and reading and printing the rows. - -## Step 3. Configure the dependencies - -1. Install `libpq` for your platform. - - For example, to install `libpq` on macOS with Homebrew, run the following command: - - {% include_cached copy-clipboard.html %} - ~~~shell - brew install libpq - ~~~ - -1. Configure `bundle` to use `libpq`. - - For example, if you installed `libpq` on macOS with Homebrew, run the following command from the `hello-world-ruby-pg` directory: - - {% include_cached copy-clipboard.html %} - ~~~shell - bundle config --local build.pg --with-opt-dir="{libpq-path}" - ~~~ - - Where `{libpq-path}` is the full path to the `libpq` installation on your machine (e.g., `/usr/local/opt/libpq`). - -1. Install the dependencies: - - {% include_cached copy-clipboard.html %} - ~~~shell - bundle install - ~~~ - -## Step 4. Run the code - -1. Set the `DATABASE_URL` environment variable to the connection string to your CockroachDB {{ site.data.products.cloud }} cluster: - -
                              - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ export DATABASE_URL="postgresql://root@localhost:26257?sslmode=disable" - ~~~ - -
                              - -
                              - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ export DATABASE_URL="{connection-string}" - ~~~ - - Where `{connection-string}` is the connection string you copied earlier. - -
                              - - The app uses the connection string saved to the `DATABASE_URL` environment variable to connect to your cluster and execute the code. - -1. Run the code to create a table and insert some rows: - - {% include_cached copy-clipboard.html %} - ~~~ shell - ruby main.rb - ~~~ - - The output should look similar to the following: - - ~~~ - ------------------------------------------------ - print_balances(): Balances as of '2021-02-23 11:56:54 -0800': - {"id"=>"1", "balance"=>"1000"} - {"id"=>"2", "balance"=>"250"} - ------------------------------------------------ - transfer_funds(): Trying to transfer 100 from account 1 to account 2 - ------------------------------------------------ - print_balances(): Balances as of '2021-02-23 11:56:55 -0800': - {"id"=>"1", "balance"=>"900"} - {"id"=>"2", "balance"=>"350"} - ~~~ - -## What's next? - -Read more about using the [Ruby pg driver](https://rubygems.org/gems/pg). - -{% include {{page.version.version}}/app/see-also-links.md %} diff --git a/src/current/v22.1/build-a-rust-app-with-cockroachdb.md b/src/current/v22.1/build-a-rust-app-with-cockroachdb.md deleted file mode 100644 index 088839bd429..00000000000 --- a/src/current/v22.1/build-a-rust-app-with-cockroachdb.md +++ /dev/null @@ -1,141 +0,0 @@ ---- -title: Build a Rust App with CockroachDB and the Rust-Postgres Driver -summary: Learn how to use CockroachDB from a simple Rust application with a low-level client driver. -toc: true -twitter: false -docs_area: get_started ---- - -This tutorial shows you how build a simple Rust application with CockroachDB and the [Rust-Postgres driver](https://github.com/sfackler/rust-postgres). - -## Before you begin - -You must have Rust and Cargo installed. For instructions on installing Rust and Cargo, see the [Cargo documentation](https://doc.rust-lang.org/cargo/getting-started/installation.html). - -## Step 1. Start CockroachDB - -{% include {{ page.version.version }}/setup/sample-setup.md %} - - -## Step 2. Get the code - -Clone the code's GitHub repo: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ git clone https://github.com/cockroachdb/example-app-rust-postgres -~~~ - -The project has the following structure: - -~~~ -├── Cargo.toml -├── LICENSE -├── README.md -└── src - └── main.rs -~~~ - -The `Cargo.toml` file is the configuration file for the example, and sets the dependencies for the project. - -{% include_cached copy-clipboard.html %} -~~~ toml -{% remote_include https://raw.githubusercontent.com/cockroachdb/example-app-rust-postgres/use-uuids/Cargo.toml %} -~~~ - -The `main` function is the entry point for the application, with the code for connecting to the cluster, creating the `accounts` table, creating accounts in that table, and transferring money between two accounts. - -The `execute_txn` function wraps database operations in the context of an explicit transaction. If a [retry error](transaction-retry-error-reference.html) is thrown, the function will retry committing the transaction, with [exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff), until the maximum number of retries is reached (by default, 15). - -{{site.data.alerts.callout_info}} -CockroachDB may require the [client to retry a transaction](transactions.html#transaction-retries) in case of read/write [contention](performance-best-practices-overview.html#transaction-contention). CockroachDB provides a generic retry function that runs inside a transaction and retries it as needed. 
You can copy and paste the retry function from here into your code. -{{site.data.alerts.end}} - -{% include_cached copy-clipboard.html %} -~~~ rust -{% remote_include https://raw.githubusercontent.com/cockroachdb/example-app-rust-postgres/use-uuids/src/main.rs || BEGIN execute_txn || END execute_txn %} -~~~ - -The `transfer_funds` function calls `execute_txn` to perform the actual transfer of funds from one account to the other. - -{% include_cached copy-clipboard.html %} -~~~ rust -{% remote_include https://raw.githubusercontent.com/cockroachdb/example-app-rust-postgres/use-uuids/src/main.rs || BEGIN transfer_funds || END transfer_funds %} -~~~ - -## Step 3. Run the code - -1. In a terminal go to the `example-app-rust-postgres` directory. - - {% include_cached copy-clipboard.html %} - ~~~ shell - cd example-app-rust-postgres - ~~~ - -1. Set the `DATABASE_URL` environment variable to the connection string to your CockroachDB {{ site.data.products.cloud }} cluster: - -
                              - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ export DATABASE_URL="postgresql://root@localhost:26257?sslmode=disable" - ~~~ - -
                              - -
                              - - 1. Edit the connection string you copied earlier and replace `sslmode=verify-full` with `sslmode=require`. - - {{site.data.alerts.callout_danger}} - You **must** change the `sslmode` in your connection string to `sslmode=require`, as the Rust `postgres` driver does not recognize `sslmode=verify-full`. This example uses `postgres-openssl`, which will perform host verification when the `sslmode=require` option is set, so `require` is functionally equivalent to `verify-full`. - {{site.data.alerts.end}} - - For example: - - ~~~ - postgresql://maxroach:ThisIsNotAGoodPassword@blue-dog-147.6wr.cockroachlabs.cloud:26257/bank?sslmode=require - ~~~ - - - 1. Set the `DATABASE_URL` environment variable to the modified connection string. - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ export DATABASE_URL="{connection-string}" - ~~~ - - Where `{connection-string}` is the modified connection string. - -
                              - - The app uses the connection string saved to the `DATABASE_URL` environment variable to connect to your cluster and execute the code. - -1. Run the code to create a table and insert some rows: - - {% include_cached copy-clipboard.html %} - ~~~ shell - cargo run - ~~~ - - The output should look similar to the following: - - ~~~ - Compiling bank v0.1.0 (/Users/maxroach/go/src/github.com/cockroachdb/example-app-rust-postgres) - Finished dev [unoptimized + debuginfo] target(s) in 8.00s - Running `target/debug/bank` - Creating accounts table if it doesn't already exist. - Deleted existing accounts. - Balances before transfer: - account id: 8e88f765-b532-4071-a23d-1b33729d01cb balance: $250 - account id: c6de70e2-78e0-484b-ae5b-6ac2aa43d9ec balance: $1000 - Final balances: - account id: 8e88f765-b532-4071-a23d-1b33729d01cb balance: $350 - account id: c6de70e2-78e0-484b-ae5b-6ac2aa43d9ec balance: $900 - ~~~ - -## What's next? - -Read more about using the Rust-Postgres driver. - -{% include {{ page.version.version }}/app/see-also-links.md %} \ No newline at end of file diff --git a/src/current/v22.1/build-a-spring-app-with-cockroachdb-jdbc.md b/src/current/v22.1/build-a-spring-app-with-cockroachdb-jdbc.md deleted file mode 100644 index 71f6f334b1d..00000000000 --- a/src/current/v22.1/build-a-spring-app-with-cockroachdb-jdbc.md +++ /dev/null @@ -1,816 +0,0 @@ ---- -title: Build a Spring App with CockroachDB and JDBC -summary: Learn how to use CockroachDB from a Spring application with the JDBC driver. -toc: true -twitter: false -referral_id: docs_roach_data_java_spring_jdbc -docs_area: develop ---- - -{% include {{ page.version.version }}/filter-tabs/crud-spring.md %} - -This tutorial shows you how to build a [Spring Boot](https://spring.io/projects/spring-boot) web application with CockroachDB, using the [Spring Data JDBC](https://spring.io/projects/spring-data-jdbc) module for data access. The code for the example application is available for download from [GitHub](https://github.com/cockroachlabs/roach-data/tree/master), along with identical examples that use [JPA](https://github.com/cockroachlabs/roach-data/tree/master/roach-data-jpa), [jOOQ](https://github.com/cockroachlabs/roach-data/tree/master/roach-data-jooq), and [MyBatis](https://github.com/cockroachlabs/roach-data/tree/master/roach-data-mybatis) for data access. - -## Step 1. Start CockroachDB - -Choose whether to run a local cluster or a free CockroachDB {{ site.data.products.cloud }} cluster. - -
                              - - -
                              - -
                              - -1. If you haven't already, [download the CockroachDB binary](install-cockroachdb.html). -1. [Start a local, secure cluster](secure-a-cluster.html). - -
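If you only need a quick single-node cluster for this tutorial, the following is a minimal sketch of a secure local setup (assuming the `cockroach` binary is on your `PATH`, and using `start-single-node` rather than the full three-node procedure described on the linked page):

{% include_cached copy-clipboard.html %}
~~~ shell
# Create a directory for certificates, then generate the CA certificate/key and the node certificate.
$ mkdir certs
$ cockroach cert create-ca --certs-dir=certs --ca-key=certs/ca.key
$ cockroach cert create-node localhost --certs-dir=certs --ca-key=certs/ca.key

# Start a single, secure node listening on the default port 26257.
$ cockroach start-single-node --certs-dir=certs --listen-addr=localhost:26257
~~~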
                              - -
                              - -### Create a free cluster - -{% include cockroachcloud/quickstart/create-a-free-cluster.md %} - -### Set up your cluster connection - -{% include cockroachcloud/quickstart/set-up-your-cluster-connection.md %} - -
                              - -## Step 2. Create a database and a user - -
                              - -1. Open a SQL shell to your local cluster using the [`cockroach sql`](cockroach-sql.html) command: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --certs-dir={certs-dir} --host=localhost:{port} - ~~~ - - Where `{certs_dir}` is the full path to the `certs` directory that you created when setting up the cluster, and `{port}` is the port at which the cluster is listening for incoming connections. - -1. In the SQL shell, create the `roach_data` database that your application will use: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE DATABASE roach_data; - ~~~ - -1. Create a SQL user for your app: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE USER {username} WITH PASSWORD {password}; - ~~~ - - Take note of the username and password. You will use it to connect to the database later. - -1. Give the user the necessary permissions: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > GRANT ALL ON DATABASE roach_data TO {username}; - ~~~ - -1. Exit the shell, and generate a certificate and key for your user by running the following command: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach cert create-client {user} --certs-dir={certs-dir} --ca-key={certs-dir}/ca.key --also-generate-pkcs8-key -~~~ - -The [`--also-generate-pkcs8-key` flag](cockroach-cert.html#flag-pkcs8) generates a key in [PKCS#8 format](https://tools.ietf.org/html/rfc5208), which is the standard key encoding format in Java. In this case, the generated PKCS8 key will be named `client.{user}.key.pk8`. - -
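For example, for a hypothetical SQL user named `maxroach` with a certificates directory named `certs` (both values are illustrative placeholders):

{% include_cached copy-clipboard.html %}
~~~ shell
$ cockroach cert create-client maxroach --certs-dir=certs --ca-key=certs/ca.key --also-generate-pkcs8-key
~~~

This writes `client.maxroach.crt`, `client.maxroach.key`, and `client.maxroach.key.pk8` to the `certs` directory; the `.pk8` file is the one referenced later in the JDBC connection URL.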
                              - -
                              - -1. If you haven't already, [download the CockroachDB binary](install-cockroachdb.html). -1. Start the [built-in SQL shell](cockroach-sql.html) using the connection string you got from the CockroachDB {{ site.data.products.cloud }} Console [earlier](#set-up-your-cluster-connection): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql \ - --url 'postgresql://@-..:26257/?sslmode=verify-full&sslrootcert='$HOME'/Library/CockroachCloud/certs/-ca.crt' - ~~~ - -1. Enter your SQL user password. - -1. In the SQL shell, create the `roach_data` database that your application will use: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE DATABASE roach_data; - ~~~ - -
                              - -## Step 3. Install JDK - -Download and install a Java Development Kit. Spring Boot supports Java versions 8, 11, and 14. In this tutorial, we use [JDK 8 from OpenJDK](https://openjdk.java.net/install/). - -## Step 4. Install Maven - -This example application uses [Maven](http://maven.apache.org/) to manage all application dependencies. Spring supports Maven versions 3.2 and later. - -To install Maven on macOS, run the following command: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ brew install maven -~~~ - -To install Maven on a Debian-based Linux distribution like Ubuntu: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ apt-get install maven -~~~ - -To install Maven on a Red Hat-based Linux distribution like Fedora: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ dnf install maven -~~~ - -For other ways to install Maven, see [its official documentation](http://maven.apache.org/install.html). - -## Step 5. Get the application code - -To get the application code, download or clone the [`roach-data` repository](https://github.com/cockroachlabs/roach-data). The code for the example JDBC application is located under the `roach-data-jdbc` directory. - -(*Optional*) To recreate the application project structure with the same dependencies as those used by this sample application, you can use [Spring initializr](https://start.spring.io/) with the following settings: - -**Project** - -- Maven Project - -**Language** - -- Java - -**Spring Boot** - -- 2.2.6 - -**Project Metadata** - -- Group: io.roach -- Artifact: data -- Name: data -- Package name: io.roach.data -- Packaging: Jar -- Java: 8 - -**Dependencies** - -- Spring Web -- Spring Data JDBC -- Spring Boot Actuator -- Spring HATEOS -- Liquibase Migration -- PostgreSQL Driver - - -## Step 6. Run the application - -Compiling and running the application code will start a web application, initialize the `accounts` table in the `roach_data` database, and submit some requests to the app's REST API that result in [atomic database transactions](transactions.html) on the running CockroachDB cluster. For details about the application code, see [Implementation details](#implementation-details). - -Open the `roach-data/roach-data-jdbc/src/main/resources/application.yml` file and edit the `datasource` settings to connect to your running database cluster: - -
                              - -~~~ yml - ... -datasource: - url: jdbc:postgresql://localhost:{port}/roach_data?ssl=true&sslmode=require&sslrootcert={certs-dir}/ca.crt&sslkey={certs-dir}/client.{username}.key.pk8&sslcert={certs-dir}/client.{username}.crt - username: {username} - password: {password} - driver-class-name: org.postgresql.Driver - ... -~~~ - -Where: - -- `{port}` is the port number. -- `{certs-dir}` is the full path to the certificates directory containing the authentication certificates that you created earlier. -- `{username}` and `{password}` specify the SQL username and password that you created earlier. - -
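For example, a filled-in local configuration might look like the following (the port, paths, and credentials are hypothetical placeholders; substitute your own values):

~~~ yml
...
datasource:
  url: jdbc:postgresql://localhost:26257/roach_data?ssl=true&sslmode=require&sslrootcert=/Users/maxroach/certs/ca.crt&sslkey=/Users/maxroach/certs/client.maxroach.key.pk8&sslcert=/Users/maxroach/certs/client.maxroach.crt
  username: maxroach
  password: ThisIsNotAGoodPassword
  driver-class-name: org.postgresql.Driver
...
~~~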
                              - -
                              - -~~~ yml -... -datasource: - url: jdbc:postgresql://{globalhost}:{port}/{cluster_name}.roach_data?sslmode=verify-full&sslrootcert={path to the CA certificate}/cc-ca.crt - username: {username} - password: {password} - driver-class-name: org.postgresql.Driver -... -~~~ - -{% include {{page.version.version}}/app/cc-free-tier-params.md %} - -
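For example, a filled-in CockroachDB {{ site.data.products.cloud }} configuration might look like the following (the host, cluster name, certificate path, and credentials are purely illustrative; use the values from your own connection string):

~~~ yml
...
datasource:
  url: jdbc:postgresql://free-tier.gcp-us-central1.cockroachlabs.cloud:26257/blue-dog-147.roach_data?sslmode=verify-full&sslrootcert=/Users/maxroach/.postgresql/cc-ca.crt
  username: maxroach
  password: ThisIsNotAGoodPassword
  driver-class-name: org.postgresql.Driver
...
~~~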
                              - -Open a terminal, and navigate to the `roach-data-jdbc` project subfolder: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cd /roach-data/roach-data-jdbc -~~~ - -Use Maven to download the application dependencies and compile the code: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ mvn clean install -~~~ - -From the `roach-data-jdbc` directory, run the application JAR file: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ java -jar target/roach-data-jdbc.jar -~~~ - -The output should look like the following: - -~~~ -^__^ -(oo)\_______ -(__)\ )\/\ CockroachDB on Spring Data JDBC (v1.0.0.BUILD-SNAPSHOT) - ||----w | powered by Spring Boot (v2.2.7.RELEASE) - || || - -2020-06-17 14:56:54.507 INFO 43008 --- [ main] io.roach.data.jdbc.JdbcApplication : Starting JdbcApplication v1.0.0.BUILD-SNAPSHOT on MyComputer with PID 43008 (path/roach-data/roach-data-jdbc/target/roach-data-jdbc.jar started by user in path/roach-data/roach-data-jdbc) -2020-06-17 14:56:54.510 INFO 43008 --- [ main] io.roach.data.jdbc.JdbcApplication : No active profile set, falling back to default profiles: default -2020-06-17 14:56:55.387 INFO 43008 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data JDBC repositories in DEFAULT mode. -2020-06-17 14:56:55.452 INFO 43008 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 59ms. Found 2 JDBC repository interfaces. -2020-06-17 14:56:56.581 INFO 43008 --- [ main] org.eclipse.jetty.util.log : Logging initialized @3378ms to org.eclipse.jetty.util.log.Slf4jLog -2020-06-17 14:56:56.657 INFO 43008 --- [ main] o.s.b.w.e.j.JettyServletWebServerFactory : Server initialized with port: 9090 -2020-06-17 14:56:56.661 INFO 43008 --- [ main] org.eclipse.jetty.server.Server : jetty-9.4.28.v20200408; built: 2020-04-08T17:49:39.557Z; git: ab228fde9e55e9164c738d7fa121f8ac5acd51c9; jvm 11.0.7+10 -2020-06-17 14:56:56.696 INFO 43008 --- [ main] o.e.j.s.h.ContextHandler.application : Initializing Spring embedded WebApplicationContext -2020-06-17 14:56:56.696 INFO 43008 --- [ main] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 2088 ms -2020-06-17 14:56:57.170 INFO 43008 --- [ main] org.eclipse.jetty.server.session : DefaultSessionIdManager workerName=node0 -2020-06-17 14:56:57.171 INFO 43008 --- [ main] org.eclipse.jetty.server.session : No SessionScavenger set, using defaults -2020-06-17 14:56:57.172 INFO 43008 --- [ main] org.eclipse.jetty.server.session : node0 Scavenging every 600000ms -2020-06-17 14:56:57.178 INFO 43008 --- [ main] o.e.jetty.server.handler.ContextHandler : Started o.s.b.w.e.j.JettyEmbeddedWebAppContext@deb3b60{application,/,[file:///private/var/folders/pg/r58v54857gq_1nqm_2tr6lg40000gn/T/jetty-docbase.3049902632643053896.8080/],AVAILABLE} -2020-06-17 14:56:57.179 INFO 43008 --- [ main] org.eclipse.jetty.server.Server : Started @3976ms -2020-06-17 14:56:58.126 INFO 43008 --- [ main] o.s.s.concurrent.ThreadPoolTaskExecutor : Initializing ExecutorService 'applicationTaskExecutor' -2020-06-17 14:56:58.369 INFO 43008 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting... -2020-06-17 14:56:58.695 INFO 43008 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Start completed. 
-2020-06-17 14:56:59.901 INFO 43008 --- [ main] liquibase.executor.jvm.JdbcExecutor : SELECT COUNT(*) FROM public.databasechangeloglock -2020-06-17 14:56:59.917 INFO 43008 --- [ main] liquibase.executor.jvm.JdbcExecutor : CREATE TABLE public.databasechangeloglock (ID INTEGER NOT NULL, LOCKED BOOLEAN NOT NULL, LOCKGRANTED TIMESTAMP WITHOUT TIME ZONE, LOCKEDBY VARCHAR(255), CONSTRAINT DATABASECHANGELOGLOCK_PKEY PRIMARY KEY (ID)) -2020-06-17 14:56:59.930 INFO 43008 --- [ main] liquibase.executor.jvm.JdbcExecutor : SELECT COUNT(*) FROM public.databasechangeloglock -2020-06-17 14:56:59.950 INFO 43008 --- [ main] liquibase.executor.jvm.JdbcExecutor : DELETE FROM public.databasechangeloglock -2020-06-17 14:56:59.953 INFO 43008 --- [ main] liquibase.executor.jvm.JdbcExecutor : INSERT INTO public.databasechangeloglock (ID, LOCKED) VALUES (1, FALSE) -2020-06-17 14:56:59.959 INFO 43008 --- [ main] liquibase.executor.jvm.JdbcExecutor : SELECT LOCKED FROM public.databasechangeloglock WHERE ID=1 -2020-06-17 14:56:59.969 INFO 43008 --- [ main] l.lockservice.StandardLockService : Successfully acquired change log lock -2020-06-17 14:57:01.367 INFO 43008 --- [ main] l.c.StandardChangeLogHistoryService : Creating database history table with name: public.databasechangelog -2020-06-17 14:57:01.369 INFO 43008 --- [ main] liquibase.executor.jvm.JdbcExecutor : CREATE TABLE public.databasechangelog (ID VARCHAR(255) NOT NULL, AUTHOR VARCHAR(255) NOT NULL, FILENAME VARCHAR(255) NOT NULL, DATEEXECUTED TIMESTAMP WITHOUT TIME ZONE NOT NULL, ORDEREXECUTED INTEGER NOT NULL, EXECTYPE VARCHAR(10) NOT NULL, MD5SUM VARCHAR(35), DESCRIPTION VARCHAR(255), COMMENTS VARCHAR(255), TAG VARCHAR(255), LIQUIBASE VARCHAR(20), CONTEXTS VARCHAR(255), LABELS VARCHAR(255), DEPLOYMENT_ID VARCHAR(10)) -2020-06-17 14:57:01.380 INFO 43008 --- [ main] liquibase.executor.jvm.JdbcExecutor : SELECT COUNT(*) FROM public.databasechangelog -2020-06-17 14:57:01.396 INFO 43008 --- [ main] l.c.StandardChangeLogHistoryService : Reading from public.databasechangelog -2020-06-17 14:57:01.397 INFO 43008 --- [ main] liquibase.executor.jvm.JdbcExecutor : SELECT * FROM public.databasechangelog ORDER BY DATEEXECUTED ASC, ORDEREXECUTED ASC -2020-06-17 14:57:01.400 INFO 43008 --- [ main] liquibase.executor.jvm.JdbcExecutor : SELECT COUNT(*) FROM public.databasechangeloglock -2020-06-17 14:57:01.418 INFO 43008 --- [ main] liquibase.executor.jvm.JdbcExecutor : -- DROP TABLE IF EXISTS account cascade; --- DROP TABLE IF EXISTS databasechangelog cascade; --- DROP TABLE IF EXISTS databasechangeloglock cascade; - -create table account -( - id int not null primary key default unique_rowid(), - balance numeric(19, 2) not null, - name varchar(128) not null, - type varchar(25) not null -) -2020-06-17 14:57:01.426 INFO 43008 --- [ main] liquibase.executor.jvm.JdbcExecutor : -- insert into account (id,balance,name,type) values --- (1, 500.00,'Alice','asset'), --- (2, 500.00,'Bob','expense'), --- (3, 500.00,'Bobby Tables','asset'), --- (4, 500.00,'Doris','expense'); -2020-06-17 14:57:01.427 INFO 43008 --- [ main] liquibase.changelog.ChangeSet : SQL in file db/create.sql executed -2020-06-17 14:57:01.430 INFO 43008 --- [ main] liquibase.changelog.ChangeSet : ChangeSet classpath:db/changelog-master.xml::1::root ran successfully in 14ms -2020-06-17 14:57:01.430 INFO 43008 --- [ main] liquibase.executor.jvm.JdbcExecutor : SELECT MAX(ORDEREXECUTED) FROM public.databasechangelog -2020-06-17 14:57:01.441 INFO 43008 --- [ main] liquibase.executor.jvm.JdbcExecutor : INSERT INTO 
public.databasechangelog (ID, AUTHOR, FILENAME, DATEEXECUTED, ORDEREXECUTED, MD5SUM, DESCRIPTION, COMMENTS, EXECTYPE, CONTEXTS, LABELS, LIQUIBASE, DEPLOYMENT_ID) VALUES ('1', 'root', 'classpath:db/changelog-master.xml', NOW(), 1, '8:939a1a8c47676119a94d0173802d207e', 'sqlFile', '', 'EXECUTED', 'crdb', NULL, '3.8.9', '2420221402') -2020-06-17 14:57:01.450 INFO 43008 --- [ main] liquibase.executor.jvm.JdbcExecutor : INSERT INTO public.account (id, name, balance, type) VALUES ('1', 'Alice', 500.00, 'asset') -2020-06-17 14:57:01.459 INFO 43008 --- [ main] liquibase.changelog.ChangeSet : New row inserted into account -2020-06-17 14:57:01.460 INFO 43008 --- [ main] liquibase.executor.jvm.JdbcExecutor : INSERT INTO public.account (id, name, balance, type) VALUES ('2', 'Bob', 500.00, 'expense') -2020-06-17 14:57:01.462 INFO 43008 --- [ main] liquibase.changelog.ChangeSet : New row inserted into account -2020-06-17 14:57:01.462 INFO 43008 --- [ main] liquibase.executor.jvm.JdbcExecutor : INSERT INTO public.account (id, name, balance, type) VALUES ('3', 'Bobby Tables', 500.00, 'asset') -2020-06-17 14:57:01.464 INFO 43008 --- [ main] liquibase.changelog.ChangeSet : New row inserted into account -2020-06-17 14:57:01.465 INFO 43008 --- [ main] liquibase.executor.jvm.JdbcExecutor : INSERT INTO public.account (id, name, balance, type) VALUES ('4', 'Doris', 500.00, 'expense') -2020-06-17 14:57:01.467 INFO 43008 --- [ main] liquibase.changelog.ChangeSet : New row inserted into account -2020-06-17 14:57:01.469 INFO 43008 --- [ main] liquibase.changelog.ChangeSet : ChangeSet classpath:db/changelog-master.xml::2::root ran successfully in 19ms -2020-06-17 14:57:01.470 INFO 43008 --- [ main] liquibase.executor.jvm.JdbcExecutor : INSERT INTO public.databasechangelog (ID, AUTHOR, FILENAME, DATEEXECUTED, ORDEREXECUTED, MD5SUM, DESCRIPTION, COMMENTS, EXECTYPE, CONTEXTS, LABELS, LIQUIBASE, DEPLOYMENT_ID) VALUES ('2', 'root', 'classpath:db/changelog-master.xml', NOW(), 2, '8:c2945f2a445cf60b4b203e1a91d14a89', 'insert tableName=account; insert tableName=account; insert tableName=account; insert tableName=account', '', 'EXECUTED', 'crdb', NULL, '3.8.9', '2420221402') -2020-06-17 14:57:01.479 INFO 43008 --- [ main] l.lockservice.StandardLockService : Successfully released change log lock -2020-06-17 14:57:01.555 INFO 43008 --- [ main] o.s.b.a.e.web.EndpointLinksResolver : Exposing 8 endpoint(s) beneath base path '/actuator' -2020-06-17 14:57:01.610 INFO 43008 --- [ main] o.e.j.s.h.ContextHandler.application : Initializing Spring DispatcherServlet 'dispatcherServlet' -2020-06-17 14:57:01.610 INFO 43008 --- [ main] o.s.web.servlet.DispatcherServlet : Initializing Servlet 'dispatcherServlet' -2020-06-17 14:57:01.620 INFO 43008 --- [ main] o.s.web.servlet.DispatcherServlet : Completed initialization in 10 ms -2020-06-17 14:57:01.653 INFO 43008 --- [ main] o.e.jetty.server.AbstractConnector : Started ServerConnector@733c423e{HTTP/1.1, (http/1.1)}{0.0.0.0:9090} -2020-06-17 14:57:01.654 INFO 43008 --- [ main] o.s.b.web.embedded.jetty.JettyWebServer : Jetty started on port(s) 9090 (http/1.1) with context path '/' -2020-06-17 14:57:01.657 INFO 43008 --- [ main] io.roach.data.jdbc.JdbcApplication : Started JdbcApplication in 7.92 seconds (JVM running for 8.454) -2020-06-17 14:57:01.659 INFO 43008 --- [ main] io.roach.data.jdbc.JdbcApplication : Lets move some $$ around! 
-2020-06-17 14:57:03.552 INFO 43008 --- [ main] io.roach.data.jdbc.JdbcApplication : Worker finished - 7 remaining -2020-06-17 14:57:03.606 INFO 43008 --- [ main] io.roach.data.jdbc.JdbcApplication : Worker finished - 6 remaining -2020-06-17 14:57:03.606 INFO 43008 --- [ main] io.roach.data.jdbc.JdbcApplication : Worker finished - 5 remaining -2020-06-17 14:57:03.607 INFO 43008 --- [ main] io.roach.data.jdbc.JdbcApplication : Worker finished - 4 remaining -2020-06-17 14:57:03.608 INFO 43008 --- [ main] io.roach.data.jdbc.JdbcApplication : Worker finished - 3 remaining -2020-06-17 14:57:03.608 INFO 43008 --- [ main] io.roach.data.jdbc.JdbcApplication : Worker finished - 2 remaining -2020-06-17 14:57:03.608 INFO 43008 --- [ main] io.roach.data.jdbc.JdbcApplication : Worker finished - 1 remaining -2020-06-17 14:57:03.608 INFO 43008 --- [ main] io.roach.data.jdbc.JdbcApplication : Worker finished - 0 remaining -2020-06-17 14:57:03.608 INFO 43008 --- [ main] io.roach.data.jdbc.JdbcApplication : All client workers finished but server keeps running. Have a nice day! -~~~ - -As the output states, the application configures a database connection, starts a web servlet listening on the address `http://localhost:9090/`, initializes the `account` table and changelog tables with [Liquibase](https://www.liquibase.org/), and then runs some test operations as requests to the application's REST API. - -For more details about the application code, see [Implementation details](#implementation-details). - -### Query the database - -#### Reads - -The `http://localhost:9090/account` endpoint returns information about all accounts in the database. `GET` requests to these endpoints are executed on the database as `SELECT` statements. - -The following `curl` command sends a `GET` request to the endpoint. The `json_pp` command formats the JSON response. - -{% include_cached copy-clipboard.html %} -~~~ shell -$ curl -X GET http://localhost:9090/account | json_pp -~~~ - -~~~ -{ - "_embedded" : { - "accounts" : [ - { - "_links" : { - "self" : { - "href" : "http://localhost:9090/account/1" - } - }, - "balance" : 500, - "name" : "Alice", - "type" : "asset" - }, - { - "_links" : { - "self" : { - "href" : "http://localhost:9090/account/2" - } - }, - "balance" : 500, - "name" : "Bob", - "type" : "expense" - }, - { - "_links" : { - "self" : { - "href" : "http://localhost:9090/account/3" - } - }, - "balance" : 500, - "name" : "Bobby Tables", - "type" : "asset" - }, - { - "_links" : { - "self" : { - "href" : "http://localhost:9090/account/4" - } - }, - "balance" : 500, - "name" : "Doris", - "type" : "expense" - } - ] - }, - "_links" : { - "self" : { - "href" : "http://localhost:9090/account?page=0&size=5" - } - }, - "page" : { - "number" : 0, - "size" : 5, - "totalElements" : 4, - "totalPages" : 1 - } -} -~~~ - -For a single account, specify the account number in the endpoint. 
For example, to see information about the accounts `1` and `2`: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ curl -X GET http://localhost:9090/account/1 | json_pp -~~~ - -~~~ -{ - "_links" : { - "self" : { - "href" : "http://localhost:9090/account/1" - } - }, - "balance" : 500, - "name" : "Alice", - "type" : "asset" -} -~~~ - -{% include_cached copy-clipboard.html %} -~~~ shell -$ curl -X GET http://localhost:9090/account/2 | json_pp -~~~ - -~~~ -{ - "_links" : { - "self" : { - "href" : "http://localhost:9090/account/2" - } - }, - "balance" : 500, - "name" : "Bob", - "type" : "expense" -} -~~~ - -The `http://localhost:9090/transfer` endpoint performs transfers between accounts. `POST` requests to this endpoint are executed as writes (i.e., [`INSERT`s](insert.html) and [`UPDATE`s](update.html)) to the database. - -#### Writes - -To make a transfer, send a `POST` request to the `transfer` endpoint, using the arguments specified in the `"href`" URL (i.e., `http://localhost:9090/transfer%7B?fromId,toId,amount`). - -{% include_cached copy-clipboard.html %} -~~~ shell -$ curl -X POST -d fromId=2 -d toId=1 -d amount=150 http://localhost:9090/transfer -~~~ - -You can use the `accounts` endpoint to verify that the transfer was successfully completed: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ curl -X GET http://localhost:9090/account/1 | json_pp -~~~ - -~~~ -{ - "_links" : { - "self" : { - "href" : "http://localhost:9090/account/1" - } - }, - "balance" : 350, - "name" : "Alice", - "type" : "asset" -} -~~~ - -{% include_cached copy-clipboard.html %} -~~~ shell -$ curl -X GET http://localhost:9090/account/2 | json_pp -~~~ - -~~~ -{ - "_links" : { - "self" : { - "href" : "http://localhost:9090/account/2" - } - }, - "balance" : 650, - "name" : "Bob", - "type" : "expense" -} -~~~ - -### Monitor the application - -`http://localhost:9090/actuator` is the base URL for a number of [Spring Boot Actuator](https://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#production-ready) endpoints that let you monitor the activity and health of the application. - -{% include_cached copy-clipboard.html %} -~~~ shell -$ curl -X GET http://localhost:9090/actuator | json_pp -~~~ - -~~~ -{ - "_links" : { - "conditions" : { - "href" : "http://localhost:9090/actuator/conditions", - "templated" : false - }, - "configprops" : { - "href" : "http://localhost:9090/actuator/configprops", - "templated" : false - }, - "env" : { - "href" : "http://localhost:9090/actuator/env", - "templated" : false - }, - "env-toMatch" : { - "href" : "http://localhost:9090/actuator/env/{toMatch}", - "templated" : true - }, - "health" : { - "href" : "http://localhost:9090/actuator/health", - "templated" : false - }, - "health-path" : { - "href" : "http://localhost:9090/actuator/health/{*path}", - "templated" : true - }, - "info" : { - "href" : "http://localhost:9090/actuator/info", - "templated" : false - }, - "liquibase" : { - "href" : "http://localhost:9090/actuator/liquibase", - "templated" : false - }, - "metrics" : { - "href" : "http://localhost:9090/actuator/metrics", - "templated" : false - }, - "metrics-requiredMetricName" : { - "href" : "http://localhost:9090/actuator/metrics/{requiredMetricName}", - "templated" : true - }, - "self" : { - "href" : "http://localhost:9090/actuator", - "templated" : false - }, - "threaddump" : { - "href" : "http://localhost:9090/actuator/threaddump", - "templated" : false - } - } -} -~~~ - -Each actuator endpoint shows specific metrics on the application. 
For example: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ curl -X GET http://localhost:9090/actuator/health | json_pp -~~~ - -~~~ -{ - "components" : { - "db" : { - "details" : { - "database" : "PostgreSQL", - "result" : 1, - "validationQuery" : "SELECT 1" - }, - "status" : "UP" - }, - "diskSpace" : { - "details" : { - "free" : 125039620096, - "threshold" : 10485760, - "total" : 250685575168 - }, - "status" : "UP" - }, - "ping" : { - "status" : "UP" - } - }, - "status" : "UP" -} -~~~ - -For more information about actuator endpoints, see the [Spring Boot Actuator Endpoint documentation](https://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#production-ready-endpoints). - -## Implementation details - -This section guides you through the different components of the application project in detail. - -### Main application process - -`JdbcApplication.java` defines the application's main process. It starts a Spring Boot web application, and then submits requests to the app's REST API that result in database transactions on the CockroachDB cluster. - -Here are the contents of [`JdbcApplication.java`](https://github.com/cockroachlabs/roach-data/blob/master/roach-data-jdbc/src/main/java/io/roach/data/jdbc/JdbcApplication.java): - -{% include_cached copy-clipboard.html %} -~~~ java -{% remote_include https://raw.githubusercontent.com/cockroachlabs/roach-data/master/roach-data-jdbc/src/main/java/io/roach/data/jdbc/JdbcApplication.java %} -~~~ - -The annotations listed at the top of the `JdbcApplication` class definition declare some important configuration properties for the entire application: - -- [`@EnableHypermediaSupport`](https://docs.spring.io/spring-hateoas/docs/current/api/org/springframework/hateoas/config/EnableHypermediaSupport.html) enables [hypermedia support for resource representation](https://en.wikipedia.org/wiki/HATEOAS) in the application. Currently, the only hypermedia format supported by Spring is [HAL](https://en.wikipedia.org/wiki/Hypertext_Application_Language), and so the `type = EnableHypermediaSupport.HypermediaType.HAL`. For details, see [Hypermedia representation](#hypermedia-representation). -- [`@EnableJdbcRepositories`](https://docs.spring.io/spring-data/jdbc/docs/current/api/org/springframework/data/jdbc/repository/config/EnableJdbcRepositories.html) enables the creation of [Spring repositories](https://docs.spring.io/spring-data/jdbc/docs/current/reference/html/#jdbc.repositories) for data access using [Spring Data JDBC](https://spring.io/projects/spring-data-jdbc). For details, see [Spring repositories](#spring-repositories). -- [`@EnableAspectJAutoProxy`](https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/context/annotation/EnableAspectJAutoProxy.html) enables the use of [`@AspectJ` annotations for declaring aspects](https://docs.spring.io/spring/docs/current/spring-framework-reference/core.html#aop-ataspectj). For details, see [Transaction management](#transaction-management). -- [`@EnableTransactionManagement`](https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/transaction/annotation/EnableTransactionManagement.html) enables [declarative transaction management](https://docs.spring.io/spring/docs/current/spring-framework-reference/data-access.html#transaction-declarative) in the application. For details, see [Transaction management](#transaction-management). 
- - Note that the `@EnableTransactionManagement` annotation is passed an `order` parameter, which indicates the ordering of advice evaluation when a common join point is reached. For details, see [Ordering advice](#ordering-advice). -- [`@SpringBootApplication`](https://docs.spring.io/spring-boot/docs/current/api/org/springframework/boot/autoconfigure/SpringBootApplication.html) is a standard configuration annotation used by Spring Boot applications. For details, see [Using the @SpringBootApplication](https://docs.spring.io/spring-boot/docs/current/reference/html/using-spring-boot.html#using-boot-using-springbootapplication-annotation) on the Spring Boot documentation site. - -### Schema management - -To create and initialize the database schema, the application uses [Liquibase](https://www.liquibase.org/). - -#### Liquibase changelogs - -Liquibase uses [changelog files](https://docs.liquibase.com/concepts/basic/changelog.html) to manage database schema changes. Changelog files include a list of instructions, known as [changesets](https://docs.liquibase.com/concepts/basic/changeset.html), that are executed against the database in a specified order. - -`resources/db/changelog-master.xml` defines the changelog for this application: - -{% include_cached copy-clipboard.html %} -~~~ java -{% remote_include https://raw.githubusercontent.com/cockroachlabs/roach-data/master/roach-data-jdbc/src/main/resources/db/changelog-master.xml %} -~~~ - -The first changeset uses [the `sqlFile` tag](https://docs.liquibase.com/change-types/community/sql-file.html), which tells Liquibase that an external `.sql` file contains some SQL statements to execute. The file specified by the changeset, `resources/db/create.sql`, creates the `account` table: - -{% include_cached copy-clipboard.html %} -~~~ java -{% remote_include https://raw.githubusercontent.com/cockroachlabs/roach-data/master/roach-data-jdbc/src/main/resources/db/create.sql %} -~~~ - -The second changeset in the changelog uses the [Liquibase XML syntax](https://docs.liquibase.com/concepts/basic/xml-format.html) to specify a series of sequential `INSERT` statements that initialize the `account` table with some values. - -When the application is started, all of the queries specified by the changesets are executed in the order specified by their `changeset` tag's `id` value. At application startup, Liquibase also creates a table called [`databasechangelog`](https://docs.liquibase.com/concepts/databasechangelog-table.html) in the database where it performs changes. This table's rows log all completed changesets. 
- -To see the completed changesets after starting the application, open a new terminal, start the [built-in SQL shell](cockroach-sql.html), and query the `databasechangelog` table: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach sql --certs-dir=certs -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM roach_data.databasechangelog; -~~~ - -~~~ - id | author | filename | dateexecuted | orderexecuted | exectype | md5sum | description | comments | tag | liquibase | contexts | labels | deployment_id ------+--------+-----------------------------------+----------------------------------+---------------+----------+------------------------------------+--------------------------------------------------------------------------------------------------------+----------+------+-----------+----------+--------+---------------- - 1 | root | classpath:db/changelog-master.xml | 2020-06-17 14:57:01.431506+00:00 | 1 | EXECUTED | 8:939a1a8c47676119a94d0173802d207e | sqlFile | | NULL | 3.8.9 | crdb | NULL | 2420221402 - 2 | root | classpath:db/changelog-master.xml | 2020-06-17 14:57:01.470847+00:00 | 2 | EXECUTED | 8:c2945f2a445cf60b4b203e1a91d14a89 | insert tableName=account; insert tableName=account; insert tableName=account; insert tableName=account | | NULL | 3.8.9 | crdb | NULL | 2420221402 -(2 rows) -~~~ - -{{site.data.alerts.callout_info}} -Liquibase does not [retry transactions](transactions.html#transaction-retries) automatically. If a changeset fails at startup, you might need to restart the application manually to complete the changeset. -{{site.data.alerts.end}} - -#### Liquibase configuration - -Typically, Liquibase properties are defined in a separate [`liquibase.properties`](https://docs.liquibase.com/workflows/liquibase-community/creating-config-properties.html) file. In this application, the [Spring properties](https://docs.spring.io/spring-boot/docs/current/reference/html/appendix-application-properties.html) file, `application.yml`, includes properties that enable and configure Liquibase: - -~~~ yml -... - liquibase: - change-log: classpath:db/changelog-master.xml - default-schema: - drop-first: false - contexts: crdb - enabled: true -... -~~~ - -The `contexts` property specifies a single [Liquibase context](https://docs.liquibase.com/concepts/advanced/contexts.html) (`crdb`). In order for a changeset to run, its `context` attribute must match a context set by this property. The `context` value is `crdb` in both of the changeset definitions in `changelog-master.xml`, so both changesets run at application startup. - -For simplicity, `application.yml` only specifies properties for a single [Spring profile](https://docs.spring.io/spring-boot/docs/current/reference/html/spring-boot-features.html#boot-features-profiles), with a single set of Liquibase properties. If you want the changelog to include changesets that only run in specific environments (e.g., for debugging and development), you can create a new Spring profile in a separate properties file (e.g., `application-dev.yml`), and specify a different set of Liquibase properties for that profile. The profile set by the application configuration will automatically use the properties in that profile's properties file. For information about setting profiles, see the [Spring documentation website](https://docs.spring.io/spring-boot/docs/current/reference/html/spring-boot-features.html#boot-features-profiles). 
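A minimal sketch of what such a profile-specific file might contain (the `application-dev.yml` file name and the extra `dev` context are hypothetical; changesets intended only for that environment would carry `context="dev"` in the changelog):

~~~ yml
spring:
  liquibase:
    change-log: classpath:db/changelog-master.xml
    contexts: crdb,dev
    enabled: true
~~~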
- -### Domain entities - -`Account.java` defines the [domain entity](https://en.wikipedia.org/wiki/Domain-driven_design#Building_blocks) for the `accounts` table. This class is used throughout the application to represent a row of data in the `accounts` table. - -Here are the contents of [`Account.java`](https://github.com/cockroachlabs/roach-data/blob/master/roach-data-jdbc/src/main/java/io/roach/data/jdbc/Account.java): - -{% include_cached copy-clipboard.html %} -~~~ java -{% remote_include https://raw.githubusercontent.com/cockroachlabs/roach-data/master/roach-data-jdbc/src/main/java/io/roach/data/jdbc/Account.java %} -~~~ - -### Hypermedia representation - -To represent database objects as [HAL+JSON](https://en.wikipedia.org/wiki/Hypertext_Application_Language) for the REST API, the application extends the Spring HATEOAS module's [RepresentationModel](https://docs.spring.io/spring-hateoas/docs/current/reference/html/#fundamentals.representation-models) class with `AccountModel`. Like the `Account` class, its attributes represent a row of data in the `accounts` table. - -The contents of [`AccountModel.java`](https://github.com/cockroachlabs/roach-data/blob/master/roach-data-jdbc/src/main/java/io/roach/data/jdbc/AccountModel.java): - -{% include_cached copy-clipboard.html %} -~~~ java -{% remote_include https://raw.githubusercontent.com/cockroachlabs/roach-data/master/roach-data-jdbc/src/main/java/io/roach/data/jdbc/AccountModel.java %} -~~~ - -We do not go into much detail about hypermedia representation in this tutorial. For more information, see the [Spring HATEOAS Reference Documentation](https://docs.spring.io/spring-hateoas/docs/current/reference/html/). - -### Spring repositories - -To abstract the database layer, Spring applications use the [`Repository` interface](https://docs.spring.io/spring-data/jdbc/docs/current/reference/html/#repositories), or some subinterface of `Repository`. This interface maps to a database object, like a table, and its methods map to queries against that object, like a [`SELECT`](selection-queries.html) or an [`INSERT`](insert.html) statement against a table. - -[`AccountRepository.java`](https://github.com/cockroachlabs/roach-data/blob/master/roach-data-jdbc/src/main/java/io/roach/data/jdbc/AccountRepository.java) defines the main repository for the `accounts` table: - -{% include_cached copy-clipboard.html %} -~~~ java -{% remote_include https://raw.githubusercontent.com/cockroachlabs/roach-data/master/roach-data-jdbc/src/main/java/io/roach/data/jdbc/AccountRepository.java %} -~~~ - -`AccountRepository` extends a subinterface of `Repository` that is provided by Spring for generic [CRUD operations](https://en.wikipedia.org/wiki/Create,_read,_update_and_delete) called `CrudRepository`. To support [pagination queries](pagination.html), repositories in other Spring Data modules, like those in Spring Data JPA, usually extend a subinterface of `CrudRepository`, called `PagingAndSortingRepository`, that includes pagination and sorting methods. At the time this sample application was created, Spring Data JDBC did not support pagination. As a result, `AccountRepository` extends a custom repository, called `PagedAccountRepository`, to provide basic [`LIMIT`/`OFFSET` pagination](pagination.html) on queries against the `accounts` table. The `AccountRepository` methods use the [`@Query`](https://docs.spring.io/spring-data/jdbc/docs/current/reference/html/#jdbc.query-methods.at-query) annotation strategy to define queries manually, as strings. 
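As an illustration of that style, a simplified Spring Data JDBC repository (a sketch for this tutorial, not the interface in the sample repo) might look like:

~~~ java
import java.math.BigDecimal;
import java.util.List;

import org.springframework.data.jdbc.repository.query.Query;
import org.springframework.data.repository.CrudRepository;
import org.springframework.data.repository.query.Param;

public interface SimpleAccountRepository extends CrudRepository<Account, Long> {
    // The SQL string is passed through to CockroachDB as-is; :limit is bound as a named parameter.
    @Query("SELECT * FROM account WHERE balance < :limit")
    List<Account> findAccountsWithBalanceBelow(@Param("limit") BigDecimal limit);
}
~~~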
- -Note that, in addition to having the `@Repository` annotation, the `AccountRepository` interface has a [`@Transactional` annotation](https://docs.spring.io/spring/docs/current/spring-framework-reference/data-access.html#transaction-declarative-annotations). When [transaction management](https://docs.spring.io/spring/docs/current/spring-framework-reference/data-access.html#transaction-declarative) is enabled in an application (i.e., with [`@EnableTransactionManagement`](https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/transaction/annotation/EnableTransactionManagement.html)), Spring automatically wraps all objects with the `@Transactional` annotation in [a proxy](https://docs.spring.io/spring/docs/current/spring-framework-reference/core.html#aop-understanding-aop-proxies) that handles calls to the object. For more information, see [Understanding the Spring Framework’s Declarative Transaction Implementation](https://docs.spring.io/spring/docs/current/spring-framework-reference/data-access.html#tx-decl-explained) on Spring's documentation website. - -`@Transactional` takes a number of parameters, including a `propagation` parameter that determines the transaction propagation behavior around an object (i.e., at what point in the stack a transaction starts and ends). This sample application follows the [entity-control-boundary (ECB) pattern](https://en.wikipedia.org/wiki/Entity-control-boundary). As such, the [REST service boundaries](#rest-controller) should determine where a [transaction](transactions.html) starts and ends rather than the query methods defined in the data access layer. To follow the ECB design pattern, `propagation=MANDATORY` for `AccountRepository`, which means that a transaction must already exist in order to call the `AccountRepository` query methods. In contrast, the `@Transactional` annotations on the [Rest controller entities](#rest-controller) in the web layer have `propagation=REQUIRES_NEW`, meaning that a new transaction must be created for each REST request. - -The aspects declared in `TransactionHintsAspect.java` and `RetryableTransactionAspect.java` further control how `@Transactional`-annotated components are handled. For more details on control flow and transaction management in the application, see [Transaction management](#transaction-management). - -### REST controller - -There are several endpoints exposed by the application's web layer, some of which monitor the health of the application, and some that map to queries executed against the connected database. All of the endpoints served by the application are handled by the `AccountController` class, which is defined in [`AccountController.java`](https://github.com/cockroachlabs/roach-data/blob/master/roach-data-jdbc/src/main/java/io/roach/data/jdbc/AccountController.java): - -{% include_cached copy-clipboard.html %} -~~~ java -{% remote_include https://raw.githubusercontent.com/cockroachlabs/roach-data/master/roach-data-jdbc/src/main/java/io/roach/data/jdbc/AccountController.java %} -~~~ - - Annotated with [`@RestController`](https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/web/bind/annotation/RestController.html), `AccountController` defines the primary [web controller](https://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller) component of the application. The `AccountController` methods define the endpoints, routes, and business logic of REST services for account querying and money transferring. 
Its attributes include an instantiation of [`AccountRepository`](#spring-repositories), called `accountRepository`, that establishes an interface to the `accounts` table through the data access layer. - -As mentioned in the [Spring repositories](#spring-repositories) section, the application's transaction boundaries follow the [entity-control-boundary (ECB) pattern](https://en.wikipedia.org/wiki/Entity-control-boundary), meaning that the web service boundaries of the application determine where a [transaction](transactions.html) starts and ends. To follow the ECB pattern, the `@Transactional` annotation on each of the HTTP entities (`listAccounts()`, `getAccount()`, and `transfer()`) has `propagation=REQUIRES_NEW`. This ensures that each time a REST request is made to an endpoint, a new transaction context is created. For details on how aspects handle control flow and transaction management in the application, see [Transaction management](#transaction-management). - -### Transaction management - -When [transaction management](https://docs.spring.io/spring/docs/current/spring-framework-reference/data-access.html#transaction-declarative) is enabled in an application, Spring automatically wraps all objects annotated with `@Transactional` in [a proxy](https://docs.spring.io/spring/docs/current/spring-framework-reference/core.html#aop-understanding-aop-proxies) that handles calls to the object. By default, this proxy starts and closes transactions according to the configured transaction management behavior (e.g., the `propagation` level). - -Using [@AspectJ annotations](https://docs.spring.io/spring/docs/current/spring-framework-reference/data-access.html#transaction-declarative-aspectj), this sample application extends the default transaction proxy behavior with two other explicitly-defined [aspects](https://en.wikipedia.org/wiki/Aspect_(computer_programming)): `TransactionHintsAspect` and `RetryableTransactionAspect`. Methods of these aspects are declared as [advice](https://en.wikipedia.org/wiki/Advice_(programming)) to be executed around method calls annotated with `@Transactional`. - -For more information about transaction management in the app, see the following sections below: - -- [Ordering advice](#ordering-advice) -- [Transaction attributes](#transaction-attributes) -- [Transaction retries](#transaction-retries) - -#### Ordering advice - -To determine the order of evaluation when multiple transaction advisors match the same [pointcut](https://en.wikipedia.org/wiki/Pointcut) (in this case, around `@Transactional` method calls), this application explicitly declares an order of precedence for calling advice. - -At the top level of the application, in the main `JdbcApplication.java` file, the [`@EnableTransactionManagement`](https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/transaction/annotation/EnableTransactionManagement.html) annotation is passed an `order` parameter. This parameter sets the order on the primary transaction advisor to one level of precedence above the lowest level, `Ordered.LOWEST_PRECEDENCE`. This means that the advisor with the lowest level of precedence is evaluated after the primary transaction advisor (i.e., within the context of an open transaction). - -For the two explicitly-defined aspects, `TransactionHintsAspect` and `RetryableTransactionAspect`, the [`@Order`](https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/core/annotation/Order.html) annotation is used. 
Like the `order` parameter on the `@EnableTransactionManagement` annotation, `@Order` takes a value that indicates the precedence of advice. The advisor with the lowest level of precedence is declared in `TransactionHintsAspect`, the aspect that defines the [transaction attributes](#transaction-attributes). `RetryableTransactionAspect`, the aspect that defines the [transaction retry logic](#transaction-retries), defines the advisor with the highest level of precedence. - -For more details about advice ordering, see [Advice Ordering](https://docs.spring.io/spring/docs/current/spring-framework-reference/core.html#aop-ataspectj-advice-ordering) on the Spring documentation site. - - -#### Transaction attributes - -The `TransactionHintsAspect` class, declared as an aspect with the [`@Aspect` annotation](https://docs.spring.io/spring/docs/current/spring-framework-reference/core.html#aop-at-aspectj), declares an advice method that defines the attributes of a transaction. The `@Order` annotation is passed the lowest level of precedence, `Ordered.LOWEST_PRECEDENCE`, indicating that this advisor must run after the main transaction advisor, within the context of a transaction. Here are the contents of [`TransactionHintsAspect.java`](https://github.com/cockroachlabs/roach-data/blob/master/roach-data-jdbc/src/main/java/io/roach/data/jdbc/TransactionHintsAspect.java): - -{% include_cached copy-clipboard.html %} -~~~ java -{% remote_include https://raw.githubusercontent.com/cockroachlabs/roach-data/master/roach-data-jdbc/src/main/java/io/roach/data/jdbc/TransactionHintsAspect.java %} -~~~ - -The `anyTransactionBoundaryOperation` method is declared as a pointcut with the [`@Pointcut` annotation](https://docs.spring.io/spring/docs/current/spring-framework-reference/core.html#aop-pointcuts). In Spring, pointcut declarations must include an expression to determine where [join points](https://en.wikipedia.org/wiki/Join_point) occur in the application control flow. To help define these expressions, Spring supports a set of [designators](https://docs.spring.io/spring/docs/current/spring-framework-reference/core.html#aop-pointcuts-designators). The application uses two of them here: `execution`, which matches method execution joint points (i.e., defines a joint point when a specific method is executed, in this case, *any* method in the `io.roach.` namespace), and `@annotation`, which limits the matches to methods with a specific annotation, in this case `@Transactional`. - -`setTransactionAttributes` sets the transaction attributes in the form of advice. Spring supports [several different annotations to declare advice](https://docs.spring.io/spring/docs/current/spring-framework-reference/core.html#aop-advice). The [`@Around` annotation](https://docs.spring.io/spring/docs/current/spring-framework-reference/core.html#aop-ataspectj-around-advice) allows an advice method to work before and after the `anyTransactionBoundaryOperation(transactional)` join point. It also allows the advice method to call the next matching advisor with the `ProceedingJoinPoint.proceed();` method. - -On verifying that the transaction is active (using `TransactionSynchronizationManager.isActualTransactionActive()`), the advice [sets some session variables](set-vars.html) using methods of the [`JdbcTemplate`](https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/jdbc/core/JdbcTemplate.html) object declared at the top of the `TransactionHintsAspect` class definition. 
These session variables (`application_name`, `statement_timeout`, and `transaction_read_only`) set [the application name](connection-parameters.html#additional-connection-parameters) for the query to "`roach-data`", the time allowed for the statement to execute before timing out to 1000 milliseconds (i.e., 1 second), and the [transaction access mode](set-transaction.html#parameters) as either `READ ONLY` or `READ WRITE`. - -#### Transaction retries - -Transactions may require retries if they experience deadlock or [transaction contention](performance-best-practices-overview.html#transaction-contention) that cannot be resolved without allowing [serialization](demo-serializable.html) anomalies. To handle transactions that are aborted due to transient serialization errors, we highly recommend writing [client-side transaction retry logic](transactions.html#client-side-intervention) into applications written on CockroachDB. - -In this application, transaction retry logic is written into the methods of the `RetryableTransactionAspect` class. This class is declared an aspect with the `@Aspect` annotation. The `@Order` annotation on this aspect class is passed `Ordered.LOWEST_PRECEDENCE-2`, a level of precedence above the primary transaction advisor. This indicates that the transaction retry advisor must run outside the context of a transaction. Here are the contents of [`RetryableTransactionAspect.java`](https://github.com/cockroachlabs/roach-data/blob/master/roach-data-jdbc/src/main/java/io/roach/data/jdbc/RetryableTransactionAspect.java): - -{% include_cached copy-clipboard.html %} -~~~ java -{% remote_include https://raw.githubusercontent.com/cockroachlabs/roach-data/master/roach-data-jdbc/src/main/java/io/roach/data/jdbc/RetryableTransactionAspect.java %} -~~~ - -The `anyTransactionBoundaryOperation` pointcut definition is identical to the one declared in `TransactionHintsAspect`. The `execution` designator matches all methods in the `io.roach.` namespace, and the `@annotation` designator limits the matches to methods with the `@Transactional` annotation. - -`retryableOperation` handles the application retry logic in the form of advice. The [`@Around` annotation](https://docs.spring.io/spring/docs/current/spring-framework-reference/core.html#aop-ataspectj-around-advice) allows the advice method to work before and after an `anyTransactionBoundaryOperation(transactional)` join point. It also allows the advice method to call the next matching advisor. - -`retryableOperation` first verifies that there is no active transaction. It then increments the retry count and attempts to proceed to the next advice method with the `ProceedingJoinPoint.proceed()` method. If the underlying methods (i.e., the primary transaction advisor's methods and the [annotated query methods](#spring-repositories)) succeed, the transaction has been successfully committed to the database. The results are then returned and the application flow continues. If a failure in the underlying layers occurs due to a transient error ([`TransientDataAccessException`](https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/dao/TransientDataAccessException.html) or [`TransactionSystemException`](https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/transaction/TransactionSystemException.html)), then the transaction is retried. The time between each retry grows with each retry until the maximum number of retries is reached. 
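The overall shape of that retry loop, with hypothetical class name, retry cap, and backoff values (see `RetryableTransactionAspect.java` in the repo for the actual implementation), looks roughly like this:

~~~ java
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;
import org.springframework.core.Ordered;
import org.springframework.core.annotation.Order;
import org.springframework.dao.TransientDataAccessException;
import org.springframework.transaction.TransactionSystemException;
import org.springframework.transaction.annotation.Transactional;

@Aspect
@Order(Ordered.LOWEST_PRECEDENCE - 2)
public class SimpleRetryAspect {

    private static final int MAX_RETRIES = 30; // hypothetical cap

    // Matches any @Transactional method in the io.roach namespace.
    @Pointcut("execution(* io.roach..*.*(..)) && @annotation(transactional)")
    public void anyTransactionBoundaryOperation(Transactional transactional) {
    }

    @Around("anyTransactionBoundaryOperation(transactional)")
    public Object retryableOperation(ProceedingJoinPoint pjp, Transactional transactional) throws Throwable {
        int attempt = 0;
        while (true) {
            try {
                // Proceed to the inner transaction advisor and the annotated method.
                return pjp.proceed();
            } catch (TransientDataAccessException | TransactionSystemException ex) {
                if (++attempt > MAX_RETRIES) {
                    throw ex; // give up once the cap is reached
                }
                // Back off a little longer before each new attempt.
                Thread.sleep(Math.min(100L * attempt, 2_000L));
            }
        }
    }
}
~~~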
- -## See also - -Spring documentation: - -- [Spring Boot website](https://spring.io/projects/spring-boot) -- [Spring Framework Overview](https://docs.spring.io/spring/docs/current/spring-framework-reference/overview.html#overview) -- [Spring Core documentation](https://docs.spring.io/spring/docs/current/spring-framework-reference/core.html#spring-core) -- [Data Access with JDBC](https://docs.spring.io/spring/docs/current/spring-framework-reference/data-access.html#jdbc) -- [Spring Web MVC](https://docs.spring.io/spring/docs/current/spring-framework-reference/web.html#mvc) - -CockroachDB documentation: - -- [Learn CockroachDB SQL](learn-cockroachdb-sql.html) -- [Client Connection Parameters](connection-parameters.html) -- [CockroachDB Developer Guide](developer-guide-overview.html) -- [Example Apps](example-apps.html) -- [Transactions](transactions.html) diff --git a/src/current/v22.1/build-a-spring-app-with-cockroachdb-jpa.md b/src/current/v22.1/build-a-spring-app-with-cockroachdb-jpa.md deleted file mode 100644 index d5c3a9dab0d..00000000000 --- a/src/current/v22.1/build-a-spring-app-with-cockroachdb-jpa.md +++ /dev/null @@ -1,706 +0,0 @@ ---- -title: Build a Spring App with CockroachDB and JPA -summary: Learn how to use CockroachDB from a Spring application with Spring Data JPA and Hibernate. -toc: true -twitter: false -referral_id: docs_roach_data_java_spring_jpa -docs_area: develop ---- - -{% include {{ page.version.version }}/filter-tabs/crud-spring.md %} - -This tutorial shows you how to build a [Spring Boot](https://spring.io/projects/spring-boot) web application with CockroachDB, using the [Spring Data JPA](https://spring.io/projects/spring-data-jpa) module for data access. The code for the example application is available for download from [GitHub](https://github.com/cockroachlabs/roach-data/tree/master), along with identical examples that use [JDBC](https://github.com/cockroachlabs/roach-data/tree/master/roach-data-jdbc), [jOOQ](https://github.com/cockroachlabs/roach-data/tree/master/roach-data-jooq), and [MyBatis](https://github.com/cockroachlabs/roach-data/tree/master/roach-data-mybatis) for data access. - -## Step 1. Start CockroachDB - -Choose whether to run a local cluster or a free CockroachDB {{ site.data.products.cloud }} cluster. - -
                              - - -
                              - -
                              - -1. If you haven't already, [download the CockroachDB binary](install-cockroachdb.html). -1. [Start a local, secure cluster](secure-a-cluster.html). - -
                              - -
                              - -### Create a free cluster - -{% include cockroachcloud/quickstart/create-a-free-cluster.md %} - -### Set up your cluster connection - -{% include cockroachcloud/quickstart/set-up-your-cluster-connection.md %} - -
                              - -## Step 2. Create a database and a user - -
                              - -1. Open a SQL shell to your local cluster using the [`cockroach sql`](cockroach-sql.html) command: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --certs-dir={certs-dir} --host=localhost:{port} - ~~~ - - Where `{certs_dir}` is the full path to the `certs` directory that you created when setting up the cluster, and `{port}` is the port at which the cluster is listening for incoming connections. - -1. In the SQL shell, create the `roach_data` database that your application will use: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE DATABASE roach_data; - ~~~ - -1. Create a SQL user for your app: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE USER {username} WITH PASSWORD {password}; - ~~~ - - Take note of the username and password. You will use it to connect to the database later. - -1. Give the user the necessary permissions: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > GRANT ALL ON DATABASE roach_data TO {username}; - ~~~ - -1. Exit the shell, and generate a certificate and key for your user by running the following command: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach cert create-client {user} --certs-dir=certs --ca-key={certs-dir}/ca.key --also-generate-pkcs8-key -~~~ - -The [`--also-generate-pkcs8-key` flag](cockroach-cert.html#flag-pkcs8) generates a key in [PKCS#8 format](https://tools.ietf.org/html/rfc5208), which is the standard key encoding format in Java. In this case, the generated PKCS8 key will be named `client.{user}.key.pk8`. - -
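
Before moving on, you can optionally verify that the new user and certificates work from Java with a short, throwaway JDBC program. This is only a sketch: the username, password, port, and certificate paths below are placeholders that you should replace with your own values, and it assumes the PostgreSQL JDBC driver is on the classpath.

~~~ java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ConnectionCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder connection values; substitute your own port, user, and cert paths.
        String url = "jdbc:postgresql://localhost:26257/roach_data"
                + "?ssl=true&sslmode=require"
                + "&sslrootcert=certs/ca.crt"
                + "&sslcert=certs/client.myuser.crt"
                + "&sslkey=certs/client.myuser.key.pk8";

        try (Connection conn = DriverManager.getConnection(url, "myuser", "mypassword");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT version()")) {
            if (rs.next()) {
                System.out.println("Connected to: " + rs.getString(1));
            }
        }
    }
}
~~~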
                              - -
                              - -1. If you haven't already, [download the CockroachDB binary](install-cockroachdb.html). -1. Start the [built-in SQL shell](cockroach-sql.html) using the connection string you got from the CockroachDB {{ site.data.products.cloud }} Console [earlier](#set-up-your-cluster-connection): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql \ - --url='postgres://{username}:{password}@{global host}:26257/{cluster_name}.defaultdb?sslmode=verify-full&sslrootcert={certs_dir}/cc-ca.crt' - ~~~ - - In the connection string copied from the CockroachDB {{ site.data.products.cloud }} Console, your username, password and cluster name are pre-populated. Replace the `{certs_dir}` placeholder with the path to the `certs` directory that you created [earlier](#set-up-your-cluster-connection). - - {% include cockroachcloud/cc-no-user-certs.md %} - -1. In the SQL shell, create the `roach_data` database that your application will use: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE DATABASE roach_data; - ~~~ - -
                              - -## Step 3. Install JDK - -Download and install a Java Development Kit on your machine. Spring Boot supports Java versions 8, 11, and 14. In this tutorial, we use [JDK 8 from OpenJDK](https://openjdk.java.net/install/). - -## Step 4. Install Maven - -This example application uses [Maven](http://maven.apache.org/) to manage all application dependencies. Spring supports Maven versions 3.2 and later. - -To install Maven on macOS, run the following command: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ brew install maven -~~~ - -To install Maven on a Debian-based Linux distribution like Ubuntu: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ apt-get install maven -~~~ - -To install Maven on a Red Hat-based Linux distribution like Fedora: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ dnf install maven -~~~ - -For other ways to install Maven, see [its official documentation](http://maven.apache.org/install.html). - -## Step 5. Get the application code - -To get the application code, download or clone the [`roach-data` repository](https://github.com/cockroachlabs/roach-data). The code for the example JPA application is located under the `roach-data-jpa` directory. - -(*Optional*) To recreate the application project structure with the same dependencies as those used by this sample application, you can use [Spring initializr](https://start.spring.io/) with the following settings: - -**Project** - -- Maven Project - -**Language** - -- Java - -**Spring Boot** - -- 2.2.6 - -**Project Metadata** - -- Group: io.roach -- Artifact: data -- Name: data -- Package name: io.roach.data -- Packaging: Jar -- Java: 8 - -**Dependencies** - -- Spring Web -- Spring Data JPA -- Spring HATEOS -- Liquibase Migration -- PostgreSQL Driver - -The [Hibernate CockroachDB dialect](https://in.relation.to/2020/07/27/hibernate-orm-5419-final-release/) is supported in Hibernate v5.4.19+. At the time of writing this tutorial, Spring Data JPA used Hibernate v5.4.15 as its default JPA provider. To specify a different version of Hibernate than the default, add an additional entry to your application's `pom.xml` file, as shown in [the `roach-data` GitHub repo](https://github.com/cockroachlabs/roach-data/blob/master/roach-data-jpa/pom.xml): - -~~~ xml - - org.hibernate - hibernate-core - 5.4.19.Final - -~~~ - -## Step 6. Run the application - -Compiling and running the application code will start a web application, initialize the `accounts` table in the `roach_data` database, and submit some requests to the app's REST API that result in [atomic database transactions](transactions.html) on the running CockroachDB cluster. For details about the application code, see [Implementation details](#implementation-details). - -Open the `roach-data/roach-data-jpa/src/main/resources/application.yml` file and edit the `datasource` settings to connect to your running database cluster: - -
                              - -~~~ yml -... -datasource: - url: jdbc:postgresql://localhost:{port}/roach_data?ssl=true&sslmode=require&sslrootcert={certs-dir}/ca.crt&sslkey={certs-dir}/client.{username}.key.pk8&sslcert={certs-dir}/client.{username}.crt - username: {username} - password: {password} - driver-class-name: org.postgresql.Driver -... -~~~ - -Where: - -- `{port}` is the port number. -- `{certs-dir}` is the full path to the certificates directory containing the authentication certificates that you created earlier. -- `{username}` and `{password}` specify the SQL username and password that you created earlier. - -
                              - -
                              - -~~~ yml -... -datasource: - url: jdbc:postgresql://{globalhost}:{port}/{cluster_name}.roach_data?sslmode=verify-full&sslrootcert={path to the CA certificate}/cc-ca.crt - username: {username} - password: {password} - driver-class-name: org.postgresql.Driver -... -~~~ - -{% include {{page.version.version}}/app/cc-free-tier-params.md %} - -
                              - -Open a terminal, and navigate to the `roach-data-jpa` project subfolder: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cd /roach-data/roach-data-jpa -~~~ - -Use Maven to download the application dependencies and compile the code: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ mvn clean install -~~~ - -From the `roach-data-jpa` directory, run the application JAR file: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ java -jar target/roach-data-jpa.jar -~~~ - -The output should look like the following: - -~~~ - . ____ _ __ _ _ - /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ -( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ - \\/ ___)| |_)| | | | | || (_| | ) ) ) ) - ' |____| .__|_| |_|_| |_\__, | / / / / - =========|_|==============|___/=/_/_/_/ - :: Spring Boot :: (v2.2.7.RELEASE) - -2020-06-22 11:54:46.243 INFO 81343 --- [ main] io.roach.data.jpa.JpaApplication : Starting JpaApplication v1.0.0.BUILD-SNAPSHOT on MyComputer.local with PID 81343 (path/code/roach-data/roach-data-jpa/target/roach-data-jpa.jar started by user in path/code/roach-data/roach-data-jpa) -2020-06-22 11:54:46.246 INFO 81343 --- [ main] io.roach.data.jpa.JpaApplication : No active profile set, falling back to default profiles: default -2020-06-22 11:54:46.929 INFO 81343 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Multiple Spring Data modules found, entering strict repository configuration mode! -2020-06-22 11:54:46.930 INFO 81343 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data JPA repositories in DEFAULT mode. -2020-06-22 11:54:47.023 INFO 81343 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 80ms. Found 1 JPA repository interfaces. -2020-06-22 11:54:47.211 INFO 81343 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Multiple Spring Data modules found, entering strict repository configuration mode! -2020-06-22 11:54:47.211 INFO 81343 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data JDBC repositories in DEFAULT mode. -2020-06-22 11:54:47.224 INFO 81343 --- [ main] .RepositoryConfigurationExtensionSupport : Spring Data JDBC - Could not safely identify store assignment for repository candidate interface io.roach.data.jpa.AccountRepository. If you want this repository to be a JDBC repository, consider annotating your entities with one of these annotations: org.springframework.data.relational.core.mapping.Table. -2020-06-22 11:54:47.224 INFO 81343 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 12ms. Found 0 JDBC repository interfaces. 
-2020-06-22 11:54:47.913 INFO 81343 --- [ main] org.eclipse.jetty.util.log : Logging initialized @2990ms to org.eclipse.jetty.util.log.Slf4jLog -2020-06-22 11:54:47.982 INFO 81343 --- [ main] o.s.b.w.e.j.JettyServletWebServerFactory : Server initialized with port: 9090 -2020-06-22 11:54:47.985 INFO 81343 --- [ main] org.eclipse.jetty.server.Server : jetty-9.4.28.v20200408; built: 2020-04-08T17:49:39.557Z; git: ab228fde9e55e9164c738d7fa121f8ac5acd51c9; jvm 11.0.7+10 -2020-06-22 11:54:48.008 INFO 81343 --- [ main] o.e.j.s.h.ContextHandler.application : Initializing Spring embedded WebApplicationContext -2020-06-22 11:54:48.008 INFO 81343 --- [ main] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 1671 ms -2020-06-22 11:54:48.123 INFO 81343 --- [ main] org.eclipse.jetty.server.session : DefaultSessionIdManager workerName=node0 -2020-06-22 11:54:48.123 INFO 81343 --- [ main] org.eclipse.jetty.server.session : No SessionScavenger set, using defaults -2020-06-22 11:54:48.124 INFO 81343 --- [ main] org.eclipse.jetty.server.session : node0 Scavenging every 660000ms -2020-06-22 11:54:48.130 INFO 81343 --- [ main] o.e.jetty.server.handler.ContextHandler : Started o.s.b.w.e.j.JettyEmbeddedWebAppContext@41394595{application,/,[file:///private/var/folders/pg/r58v54857gq_1nqm_2tr6lg40000gn/T/jetty-docbase.7785392427958606416.8080/],AVAILABLE} -2020-06-22 11:54:48.131 INFO 81343 --- [ main] org.eclipse.jetty.server.Server : Started @3207ms -2020-06-22 11:54:48.201 INFO 81343 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting... -2020-06-22 11:54:48.483 INFO 81343 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Start completed. -2020-06-22 11:54:49.507 INFO 81343 --- [ main] liquibase.executor.jvm.JdbcExecutor : SELECT COUNT(*) FROM public.databasechangeloglock -2020-06-22 11:54:49.522 INFO 81343 --- [ main] liquibase.executor.jvm.JdbcExecutor : CREATE TABLE public.databasechangeloglock (ID INTEGER NOT NULL, LOCKED BOOLEAN NOT NULL, LOCKGRANTED TIMESTAMP WITHOUT TIME ZONE, LOCKEDBY VARCHAR(255), CONSTRAINT DATABASECHANGELOGLOCK_PKEY PRIMARY KEY (ID)) -2020-06-22 11:54:49.535 INFO 81343 --- [ main] liquibase.executor.jvm.JdbcExecutor : SELECT COUNT(*) FROM public.databasechangeloglock -2020-06-22 11:54:49.554 INFO 81343 --- [ main] liquibase.executor.jvm.JdbcExecutor : DELETE FROM public.databasechangeloglock -2020-06-22 11:54:49.555 INFO 81343 --- [ main] liquibase.executor.jvm.JdbcExecutor : INSERT INTO public.databasechangeloglock (ID, LOCKED) VALUES (1, FALSE) -2020-06-22 11:54:49.562 INFO 81343 --- [ main] liquibase.executor.jvm.JdbcExecutor : SELECT LOCKED FROM public.databasechangeloglock WHERE ID=1 -2020-06-22 11:54:49.570 INFO 81343 --- [ main] l.lockservice.StandardLockService : Successfully acquired change log lock -2020-06-22 11:54:50.519 INFO 81343 --- [ main] l.c.StandardChangeLogHistoryService : Creating database history table with name: public.databasechangelog -2020-06-22 11:54:50.520 INFO 81343 --- [ main] liquibase.executor.jvm.JdbcExecutor : CREATE TABLE public.databasechangelog (ID VARCHAR(255) NOT NULL, AUTHOR VARCHAR(255) NOT NULL, FILENAME VARCHAR(255) NOT NULL, DATEEXECUTED TIMESTAMP WITHOUT TIME ZONE NOT NULL, ORDEREXECUTED INTEGER NOT NULL, EXECTYPE VARCHAR(10) NOT NULL, MD5SUM VARCHAR(35), DESCRIPTION VARCHAR(255), COMMENTS VARCHAR(255), TAG VARCHAR(255), LIQUIBASE VARCHAR(20), CONTEXTS VARCHAR(255), LABELS VARCHAR(255), DEPLOYMENT_ID VARCHAR(10)) -2020-06-22 11:54:50.534 INFO 81343 --- [ main] 
liquibase.executor.jvm.JdbcExecutor : SELECT COUNT(*) FROM public.databasechangelog -2020-06-22 11:54:50.547 INFO 81343 --- [ main] l.c.StandardChangeLogHistoryService : Reading from public.databasechangelog -2020-06-22 11:54:50.548 INFO 81343 --- [ main] liquibase.executor.jvm.JdbcExecutor : SELECT * FROM public.databasechangelog ORDER BY DATEEXECUTED ASC, ORDEREXECUTED ASC -2020-06-22 11:54:50.550 INFO 81343 --- [ main] liquibase.executor.jvm.JdbcExecutor : SELECT COUNT(*) FROM public.databasechangeloglock -2020-06-22 11:54:50.566 INFO 81343 --- [ main] liquibase.executor.jvm.JdbcExecutor : create table account -( - id int not null primary key default unique_rowid(), - balance numeric(19, 2) not null, - name varchar(128) not null, - type varchar(25) not null -) -2020-06-22 11:54:50.575 INFO 81343 --- [ main] liquibase.changelog.ChangeSet : SQL in file db/create.sql executed -2020-06-22 11:54:50.581 INFO 81343 --- [ main] liquibase.changelog.ChangeSet : ChangeSet classpath:db/changelog-master.xml::1::root ran successfully in 16ms -2020-06-22 11:54:50.585 INFO 81343 --- [ main] liquibase.executor.jvm.JdbcExecutor : SELECT MAX(ORDEREXECUTED) FROM public.databasechangelog -2020-06-22 11:54:50.589 INFO 81343 --- [ main] liquibase.executor.jvm.JdbcExecutor : INSERT INTO public.databasechangelog (ID, AUTHOR, FILENAME, DATEEXECUTED, ORDEREXECUTED, MD5SUM, DESCRIPTION, COMMENTS, EXECTYPE, CONTEXTS, LABELS, LIQUIBASE, DEPLOYMENT_ID) VALUES ('1', 'root', 'classpath:db/changelog-master.xml', NOW(), 1, '8:567321cdb0100cbe76731a7ed414674b', 'sqlFile', '', 'EXECUTED', 'crdb', NULL, '3.8.9', '2852090551') -2020-06-22 11:54:50.593 INFO 81343 --- [ main] liquibase.executor.jvm.JdbcExecutor : INSERT INTO public.account (id, name, balance, type) VALUES ('1', 'Alice', 500.00, 'asset') -2020-06-22 11:54:50.601 INFO 81343 --- [ main] liquibase.changelog.ChangeSet : New row inserted into account -2020-06-22 11:54:50.602 INFO 81343 --- [ main] liquibase.executor.jvm.JdbcExecutor : INSERT INTO public.account (id, name, balance, type) VALUES ('2', 'Bob', 500.00, 'expense') -2020-06-22 11:54:50.603 INFO 81343 --- [ main] liquibase.changelog.ChangeSet : New row inserted into account -2020-06-22 11:54:50.604 INFO 81343 --- [ main] liquibase.executor.jvm.JdbcExecutor : INSERT INTO public.account (id, name, balance, type) VALUES ('3', 'Bobby Tables', 500.00, 'asset') -2020-06-22 11:54:50.605 INFO 81343 --- [ main] liquibase.changelog.ChangeSet : New row inserted into account -2020-06-22 11:54:50.605 INFO 81343 --- [ main] liquibase.executor.jvm.JdbcExecutor : INSERT INTO public.account (id, name, balance, type) VALUES ('4', 'Doris', 500.00, 'expense') -2020-06-22 11:54:50.606 INFO 81343 --- [ main] liquibase.changelog.ChangeSet : New row inserted into account -2020-06-22 11:54:50.608 INFO 81343 --- [ main] liquibase.changelog.ChangeSet : ChangeSet classpath:db/changelog-master.xml::2::root ran successfully in 16ms -2020-06-22 11:54:50.609 INFO 81343 --- [ main] liquibase.executor.jvm.JdbcExecutor : INSERT INTO public.databasechangelog (ID, AUTHOR, FILENAME, DATEEXECUTED, ORDEREXECUTED, MD5SUM, DESCRIPTION, COMMENTS, EXECTYPE, CONTEXTS, LABELS, LIQUIBASE, DEPLOYMENT_ID) VALUES ('2', 'root', 'classpath:db/changelog-master.xml', NOW(), 2, '8:c2945f2a445cf60b4b203e1a91d14a89', 'insert tableName=account; insert tableName=account; insert tableName=account; insert tableName=account', '', 'EXECUTED', 'crdb', NULL, '3.8.9', '2852090551') -2020-06-22 11:54:50.615 INFO 81343 --- [ main] l.lockservice.StandardLockService : 
Successfully released change log lock -2020-06-22 11:54:50.727 INFO 81343 --- [ main] o.hibernate.jpa.internal.util.LogHelper : HHH000204: Processing PersistenceUnitInfo [name: default] -2020-06-22 11:54:50.817 INFO 81343 --- [ main] org.hibernate.Version : HHH000412: Hibernate ORM core version 5.4.19.Final -2020-06-22 11:54:50.993 INFO 81343 --- [ main] o.hibernate.annotations.common.Version : HCANN000001: Hibernate Commons Annotations {5.1.0.Final} -2020-06-22 11:54:51.154 INFO 81343 --- [ main] org.hibernate.dialect.Dialect : HHH000400: Using dialect: org.hibernate.dialect.CockroachDB201Dialect -2020-06-22 11:54:51.875 INFO 81343 --- [ main] o.h.e.t.j.p.i.JtaPlatformInitiator : HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform] -2020-06-22 11:54:51.886 INFO 81343 --- [ main] j.LocalContainerEntityManagerFactoryBean : Initialized JPA EntityManagerFactory for persistence unit 'default' -2020-06-22 11:54:52.700 INFO 81343 --- [ main] o.s.s.concurrent.ThreadPoolTaskExecutor : Initializing ExecutorService 'applicationTaskExecutor' -2020-06-22 11:54:52.958 INFO 81343 --- [ main] o.e.j.s.h.ContextHandler.application : Initializing Spring DispatcherServlet 'dispatcherServlet' -2020-06-22 11:54:52.958 INFO 81343 --- [ main] o.s.web.servlet.DispatcherServlet : Initializing Servlet 'dispatcherServlet' -2020-06-22 11:54:52.966 INFO 81343 --- [ main] o.s.web.servlet.DispatcherServlet : Completed initialization in 8 ms -2020-06-22 11:54:52.997 INFO 81343 --- [ main] o.e.jetty.server.AbstractConnector : Started ServerConnector@1568159{HTTP/1.1, (http/1.1)}{0.0.0.0:9090} -2020-06-22 11:54:52.999 INFO 81343 --- [ main] o.s.b.web.embedded.jetty.JettyWebServer : Jetty started on port(s) 9090 (http/1.1) with context path '/' -2020-06-22 11:54:53.001 INFO 81343 --- [ main] io.roach.data.jpa.JpaApplication : Started JpaApplication in 7.518 seconds (JVM running for 8.077) -2020-06-22 11:54:53.002 INFO 81343 --- [ main] io.roach.data.jpa.JpaApplication : Lets move some $$ around! -2020-06-22 11:54:54.399 INFO 81343 --- [ main] io.roach.data.jpa.JpaApplication : Worker finished - 7 remaining -2020-06-22 11:54:54.447 INFO 81343 --- [ main] io.roach.data.jpa.JpaApplication : Worker finished - 6 remaining -2020-06-22 11:54:54.447 INFO 81343 --- [ main] io.roach.data.jpa.JpaApplication : Worker finished - 5 remaining -2020-06-22 11:54:54.447 INFO 81343 --- [ main] io.roach.data.jpa.JpaApplication : Worker finished - 4 remaining -2020-06-22 11:54:54.447 INFO 81343 --- [ main] io.roach.data.jpa.JpaApplication : Worker finished - 3 remaining -2020-06-22 11:54:54.447 INFO 81343 --- [ main] io.roach.data.jpa.JpaApplication : Worker finished - 2 remaining -2020-06-22 11:54:54.447 INFO 81343 --- [ main] io.roach.data.jpa.JpaApplication : Worker finished - 1 remaining -2020-06-22 11:54:54.447 INFO 81343 --- [ main] io.roach.data.jpa.JpaApplication : Worker finished - 0 remaining -2020-06-22 11:54:54.447 INFO 81343 --- [ main] io.roach.data.jpa.JpaApplication : All client workers finished but server keeps running. Have a nice day! -~~~ - -As the output states, the application configures a database connection, starts a web servlet listening on the address `http://localhost:9090/`, initializes the `account` table and changelog tables with [Liquibase](https://www.liquibase.org/), and then runs some test operations as requests to the application's REST API. - -For more details about the application code, see [Implementation details](#implementation-details). 
- -### Query the database - -#### Reads - -The `http://localhost:9090/account` endpoint returns information about all accounts in the database. `GET` requests to these endpoints are executed on the database as `SELECT` statements. - -The following `curl` command sends a `GET` request to the endpoint. The `json_pp` command formats the JSON response. - -{% include_cached copy-clipboard.html %} -~~~ shell -$ curl -X GET http://localhost:9090/account | json_pp -~~~ - -~~~ -{ - "_embedded" : { - "accounts" : [ - { - "_links" : { - "self" : { - "href" : "http://localhost:9090/account/1" - } - }, - "balance" : 500, - "name" : "Alice", - "type" : "asset" - }, - { - "_links" : { - "self" : { - "href" : "http://localhost:9090/account/2" - } - }, - "balance" : 500, - "name" : "Bob", - "type" : "expense" - }, - { - "_links" : { - "self" : { - "href" : "http://localhost:9090/account/3" - } - }, - "balance" : 500, - "name" : "Bobby Tables", - "type" : "asset" - }, - { - "_links" : { - "self" : { - "href" : "http://localhost:9090/account/4" - } - }, - "balance" : 500, - "name" : "Doris", - "type" : "expense" - } - ] - }, - "_links" : { - "self" : { - "href" : "http://localhost:9090/account?page=0&size=5" - } - }, - "page" : { - "number" : 0, - "size" : 5, - "totalElements" : 4, - "totalPages" : 1 - } -} -~~~ - -For a single account, specify the account number in the endpoint. For example, to see information about the accounts `1` and `2`: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ curl -X GET http://localhost:9090/account/1 | json_pp -~~~ - -~~~ -{ - "_links" : { - "self" : { - "href" : "http://localhost:9090/account/1" - } - }, - "balance" : 500, - "name" : "Alice", - "type" : "asset" -} -~~~ - -{% include_cached copy-clipboard.html %} -~~~ shell -$ curl -X GET http://localhost:9090/account/2 | json_pp -~~~ - -~~~ -{ - "_links" : { - "self" : { - "href" : "http://localhost:9090/account/2" - } - }, - "balance" : 500, - "name" : "Bob", - "type" : "expense" -} -~~~ - -The `http://localhost:9090/transfer` endpoint performs transfers between accounts. `POST` requests to this endpoint are executed as writes (i.e., [`INSERT`s](insert.html) and [`UPDATE`s](update.html)) to the database. - -#### Writes - -To make a transfer, send a `POST` request to the `transfer` endpoint, using the arguments specified in the `"href`" URL (i.e., `http://localhost:9090/transfer%7B?fromId,toId,amount`). - -{% include_cached copy-clipboard.html %} -~~~ shell -$ curl -X POST -d fromId=2 -d toId=1 -d amount=150 http://localhost:9090/transfer -~~~ - -You can use the `accounts` endpoint to verify that the transfer was successfully completed: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ curl -X GET http://localhost:9090/account/1 | json_pp -~~~ - -~~~ -{ - "_links" : { - "self" : { - "href" : "http://localhost:9090/account/1" - } - }, - "balance" : 650, - "name" : "Alice", - "type" : "asset" -} -~~~ - -{% include_cached copy-clipboard.html %} -~~~ shell -$ curl -X GET http://localhost:9090/account/2 | json_pp -~~~ - -~~~ -{ - "_links" : { - "self" : { - "href" : "http://localhost:9090/account/2" - } - }, - "balance" : 350, - "name" : "Bob", - "type" : "expense" -} -~~~ - -## Implementation details - -This section guides you through the different components of the application project in detail. - -### Main application process - -`JpaApplication.java` defines the application's main process. 
It starts a Spring Boot web application, and then submits requests to the app's REST API that result in database transactions on the CockroachDB cluster. - -Here are the contents of [`JpaApplication.java`](https://github.com/cockroachlabs/roach-data/blob/master/roach-data-jpa/src/main/java/io/roach/data/jpa/JpaApplication.java): - -{% include_cached copy-clipboard.html %} -~~~ java -{% remote_include https://raw.githubusercontent.com/cockroachlabs/roach-data/master/roach-data-jpa/src/main/java/io/roach/data/jpa/JpaApplication.java %} -~~~ - -The annotations listed at the top of the `JpaApplication` class definition declare some important configuration properties for the entire application: - -- [`@EnableHypermediaSupport`](https://docs.spring.io/spring-hateoas/docs/current/api/org/springframework/hateoas/config/EnableHypermediaSupport.html) enables [hypermedia support for resource representation](https://en.wikipedia.org/wiki/HATEOAS) in the application. Currently, the only hypermedia format supported by Spring is [HAL](https://en.wikipedia.org/wiki/Hypertext_Application_Language), and so the `type = EnableHypermediaSupport.HypermediaType.HAL`. For details, see [Hypermedia representation](#hypermedia-representation). -- [`@EnableJpaRepositories`](https://docs.spring.io/spring-data/data-jpa/docs/current/api/org/springframework/data/jpa/repository/config/EnableJpaRepositories.html) enables the creation of [Spring repositories](https://docs.spring.io/spring-data/jpa/docs/current/reference/html/#repositories) for data access using [Spring Data JPA](https://spring.io/projects/spring-data-jpa). For details, see [Spring repositories](#spring-repositories). -- [`@EnableAspectJAutoProxy`](https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/context/annotation/EnableAspectJAutoProxy.html) enables the use of [`@AspectJ` annotations for declaring aspects](https://docs.spring.io/spring/docs/current/spring-framework-reference/core.html#aop-ataspectj). For details, see [Transaction management](#transaction-management). -- [`@EnableTransactionManagement`](https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/transaction/annotation/EnableTransactionManagement.html) enables [declarative transaction management](https://docs.spring.io/spring/docs/current/spring-framework-reference/data-access.html#transaction-declarative) in the application. For details, see [Transaction management](#transaction-management). - - Note that the `@EnableTransactionManagement` annotation is passed an `order` parameter, which indicates the ordering of advice evaluation when a common join point is reached. For details, see [Ordering advice](#ordering-advice). -- [`@SpringBootApplication`](https://docs.spring.io/spring-boot/docs/current/api/org/springframework/boot/autoconfigure/SpringBootApplication.html) is a standard configuration annotation used by Spring Boot applications. For details, see [Using the @SpringBootApplication](https://docs.spring.io/spring-boot/docs/current/reference/html/using-spring-boot.html#using-boot-using-springbootapplication-annotation) on the Spring Boot documentation site. - -### Schema management - -To create and initialize the database schema, the application uses [Liquibase](https://www.liquibase.org/). - -Liquibase uses files called [changelogs](https://docs.liquibase.com/concepts/basic/changelog.html) to manage the changes to the database. 
Changelog files include a list of instructions, known as [changesets](https://docs.liquibase.com/concepts/basic/changeset.html), that are executed against the database in a specified order. - -`resources/db/changelog-master.xml` defines the changelog for this application: - -{% include_cached copy-clipboard.html %} -~~~ java -{% remote_include https://raw.githubusercontent.com/cockroachlabs/roach-data/master/roach-data-jpa/src/main/resources/db/changelog-master.xml %} -~~~ - -The first changeset uses [the `sqlFile` tag](https://docs.liquibase.com/change-types/community/sql-file.html), which tells Liquibase that an external `.sql` file contains some SQL statements to execute. The file specified by the changeset, `resources/db/create.sql`, creates the `account` table: - -{% include_cached copy-clipboard.html %} -~~~ java -{% remote_include https://raw.githubusercontent.com/cockroachlabs/roach-data/master/roach-data-jpa/src/main/resources/db/create.sql %} -~~~ - -The second changeset in the changelog uses the [Liquibase XML syntax](https://docs.liquibase.com/concepts/basic/xml-format.html) to specify a series of sequential `INSERT` statements that initialize the `account` table with some values. - -When the application is started, all of the queries specified by the changesets are executed in the order specified by their `changeset` tag's `id` value. At application startup, Liquibase also creates a table called [`databasechangelog`](https://docs.liquibase.com/concepts/databasechangelog-table.html) in the database where it performs changes. This table's rows log all completed changesets. - -To see the completed changesets, open a new terminal, start the [built-in SQL shell](cockroach-sql.html), and query the `databasechangelog` table: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach sql --certs-dir=certs -~~~ - - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM roach_data.databasechangelog; -~~~ - -~~~ - id | author | filename | dateexecuted | orderexecuted | exectype | md5sum | description | comments | tag | liquibase | contexts | labels | deployment_id ------+--------+-----------------------------------+----------------------------------+---------------+----------+------------------------------------+--------------------------------------------------------------------------------------------------------+----------+------+-----------+----------+--------+---------------- - 1 | root | classpath:db/changelog-master.xml | 2020-06-17 14:57:01.431506+00:00 | 1 | EXECUTED | 8:939a1a8c47676119a94d0173802d207e | sqlFile | | NULL | 3.8.9 | crdb | NULL | 2420221402 - 2 | root | classpath:db/changelog-master.xml | 2020-06-17 14:57:01.470847+00:00 | 2 | EXECUTED | 8:c2945f2a445cf60b4b203e1a91d14a89 | insert tableName=account; insert tableName=account; insert tableName=account; insert tableName=account | | NULL | 3.8.9 | crdb | NULL | 2420221402 -(2 rows) -~~~ - -Typically, Liquibase properties are defined in a separate [`liquibase.properties`](https://docs.liquibase.com/workflows/liquibase-community/creating-config-properties.html) file. In this application, `application.yml`, the properties file that sets the [Spring properties](https://docs.spring.io/spring-boot/docs/current/reference/html/appendix-application-properties.html), also includes some properties that enable and configure Liquibase: - -~~~ yml -... - liquibase: - change-log: classpath:db/changelog-master.xml - default-schema: - drop-first: false - contexts: crdb - enabled: true -... 
-~~~ - -### Domain entities - -`Account.java` defines the [entity](https://en.wikipedia.org/wiki/Domain-driven_design#Building_blocks) for the `accounts` table. This class is used throughout the application to represent a row of data in the `accounts` table. - -Here are the contents of [`Account.java`](https://github.com/cockroachlabs/roach-data/tree/master/roach-data-jpa/src/main/java/io/roach/data/jpa/Account.java): - -{% include_cached copy-clipboard.html %} -~~~ java -{% remote_include https://raw.githubusercontent.com/cockroachlabs/roach-data/master/roach-data-jpa/src/main/java/io/roach/data/jpa/Account.java %} -~~~ - -Spring Data JPA supports standard Java Persistence API (JPA) annotations for domain entity class definitions. The `Account` class definition uses these annotations to create the `accounts` table entity: - -- `@Entity` declares the `Account` an entity class. -- `@Table` associates the entity with the persisted `account` table. -- `@Column` declare each private attribute a column of the `account` table. -- `@GeneratedValue` indicates that the value for the column should be automatically generated. -- `@Id` declares the [primary key column](primary-key.html) of the table. -- `@Enumerated` specifies the type of data that the column holds. - -### Hypermedia representation - -To represent database objects as [HAL+JSON](https://en.wikipedia.org/wiki/Hypertext_Application_Language) for the REST API, the application extends the Spring HATEOAS module's [RepresentationModel](https://docs.spring.io/spring-hateoas/docs/current/reference/html/#fundamentals.representation-models) class with `AccountModel`. Like the `Account` class, its attributes represent a row of data in the `accounts` table. - -The contents of [`AccountModel.java`](https://github.com/cockroachlabs/roach-data/tree/master/roach-data-jpa/src/main/java/io/roach/data/jpa/AccountModel.java): - -{% include_cached copy-clipboard.html %} -~~~ java -{% remote_include https://raw.githubusercontent.com/cockroachlabs/roach-data/master/roach-data-jpa/src/main/java/io/roach/data/jpa/AccountModel.java %} -~~~ - -We do not go into much detail about hypermedia representation in this tutorial. For more information, see the [Spring HATEOAS Reference Documentation](https://docs.spring.io/spring-hateoas/docs/current/reference/html/). - -### Spring repositories - -To abstract the database layer, Spring applications use the [`Repository` interface](https://docs.spring.io/spring-data/jpa/docs/current/reference/html/#repositories), or some subinterface of `Repository`. This interface maps to a database object, like a table, and its methods map to queries against that object, like a [`SELECT`](selection-queries.html) or an [`INSERT`](insert.html) statement against a table. - -[`AccountRepository.java`](https://github.com/cockroachlabs/roach-data/tree/master/roach-data-jpa/src/main/java/io/roach/data/jpa/AccountRepository.java) defines the main repository for the `accounts` table: - -{% include_cached copy-clipboard.html %} -~~~ java -{% remote_include https://raw.githubusercontent.com/cockroachlabs/roach-data/master/roach-data-jpa/src/main/java/io/roach/data/jpa/AccountRepository.java %} -~~~ - -`AccountRepository` extends a subinterface of `Repository` that is provided by Spring for JPA data access called `JpaRepository`. The `AccountRepository` methods use the [`@Query`](https://docs.spring.io/spring-data/jpa/docs/current/reference/html/#jpa.query-methods.at-query) annotation strategy to define queries manually, as strings. 
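
As an illustration of the `@Query` strategy, a simplified repository might look like the following. This sketch is not the project's `AccountRepository.java` (shown above); the method names and queries are hypothetical, and it assumes the `Account` entity uses a `Long` primary key and exposes `id` and `balance` fields.

~~~ java
import java.math.BigDecimal;

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Modifying;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;
import org.springframework.stereotype.Repository;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Repository
@Transactional(propagation = Propagation.MANDATORY) // a transaction must already be open
public interface AccountRepositorySketch extends JpaRepository<Account, Long> {

    // The query is written manually as a JPQL string.
    @Query("select a.balance from Account a where a.id = :id")
    BigDecimal findBalanceById(@Param("id") Long id);

    // Modifying queries run inside the transaction opened at the web boundary.
    @Modifying
    @Query("update Account a set a.balance = a.balance + :amount where a.id = :id")
    void addToBalance(@Param("id") Long id, @Param("amount") BigDecimal amount);
}
~~~

Because the interface extends `JpaRepository`, the standard finder and save methods are inherited and do not need to be declared.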
- -Note that, in addition to having the `@Repository` annotation, the `AccountRepository` interface has a [`@Transactional` annotation](https://docs.spring.io/spring/docs/current/spring-framework-reference/data-access.html#transaction-declarative-annotations). When [transaction management](https://docs.spring.io/spring/docs/current/spring-framework-reference/data-access.html#transaction-declarative) is enabled in an application (i.e., with [`@EnableTransactionManagement`](https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/transaction/annotation/EnableTransactionManagement.html)), Spring automatically wraps all objects with the `@Transactional` annotation in [a proxy](https://docs.spring.io/spring/docs/current/spring-framework-reference/core.html#aop-understanding-aop-proxies) that handles calls to the object. - -`@Transactional` takes a number of parameters, including a `propagation` parameter that determines the transaction propagation behavior around an object (i.e., at what point in the stack a transaction starts and ends). This sample application follows the [entity-control-boundary (ECB) pattern](https://en.wikipedia.org/wiki/Entity-control-boundary). As such, the [REST service boundaries](#rest-controller) should determine where a [transaction](transactions.html) starts and ends rather than the query methods defined in the data access layer. To follow the ECB design pattern, `propagation=MANDATORY` for `AccountRepository`, which means that a transaction must already exist in order to call the `AccountRepository` query methods. In contrast, the `@Transactional` annotations on the [Rest controller entities](#rest-controller) in the web layer have `propagation=REQUIRES_NEW`, meaning that a new transaction must be created for each REST request. - -For details about control flow and transaction management in this application, see [Transaction management](#transaction-management). For more general information about Spring transaction management, see [Understanding the Spring Framework’s Declarative Transaction Implementation](https://docs.spring.io/spring/docs/current/spring-framework-reference/data-access.html#tx-decl-explained) on Spring's documentation website. - -### REST controller - -There are several endpoints exposed by the application's web layer, some of which monitor the health of the application, and some that map to queries executed against the connected database. All of the endpoints served by the application are handled by the `AccountController` class, which is defined in [`AccountController.java`](https://github.com/cockroachlabs/roach-data/tree/master/roach-data-jpa/src/main/java/io/roach/data/jpa/AccountController.java): - -{% include_cached copy-clipboard.html %} -~~~ java -{% remote_include https://raw.githubusercontent.com/cockroachlabs/roach-data/master/roach-data-jpa/src/main/java/io/roach/data/jpa/AccountController.java %} -~~~ - - Annotated with [`@RestController`](https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/web/bind/annotation/RestController.html), `AccountController` defines the primary [web controller](https://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller) component of the application. The `AccountController` methods define the endpoints, routes, and business logic of REST services for account querying and money transferring. 
Its attributes include an instantiation of [`AccountRepository`](#spring-repositories), called `accountRepository`, that establishes an interface to the `accounts` table through the data access layer. - -As mentioned in the [Spring repositories](#spring-repositories) section, the application's transaction boundaries follow the [entity-control-boundary (ECB) pattern](https://en.wikipedia.org/wiki/Entity-control-boundary), meaning that the web service boundaries of the application determine where a [transaction](transactions.html) starts and ends. To follow the ECB pattern, the `@Transactional` annotation on each of the HTTP entities (`listAccounts()`, `getAccount()`, and `transfer()`) has `propagation=REQUIRES_NEW`. This ensures that each time a REST request is made to an endpoint, a new transaction context is created. - -For details on how aspects handle control flow and transaction management in the application, see [Transaction management](#transaction-management). - -### Transaction management - -When [transaction management](https://docs.spring.io/spring/docs/current/spring-framework-reference/data-access.html#transaction-declarative) is enabled in an application, Spring automatically wraps all objects annotated with `@Transactional` in [a proxy](https://docs.spring.io/spring/docs/current/spring-framework-reference/core.html#aop-understanding-aop-proxies) that handles calls to the object. By default, this proxy starts and closes transactions according to the configured transaction management behavior (e.g., the `propagation` level). The proxy methods that handle transactions make up the *primary transaction advisor*. - -Using [@AspectJ annotations](https://docs.spring.io/spring/docs/current/spring-framework-reference/data-access.html#transaction-declarative-aspectj), this sample application extends the default transaction proxy behavior to handle [transaction retries](#transaction-retries) with another explicitly-defined [aspect](https://en.wikipedia.org/wiki/Aspect_(computer_programming)): `RetryableTransactionAspect`. Methods of this aspect are declared as [advice](https://en.wikipedia.org/wiki/Advice_(programming)) to be executed around method calls annotated with `@Transactional`. - -#### Ordering advice - -To determine the order of evaluation when multiple transaction advisors match the same [pointcut](https://en.wikipedia.org/wiki/Pointcut) (in this case, around `@Transactional` method calls), this application explicitly declares an order of precedence for calling advice. - -The [`@Order`](https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/core/annotation/Order.html) annotation takes a value that indicates the precedence of its advice. In the case of `RetryableTransactionAspect`, the annotation is passed `Ordered.LOWEST_PRECEDENCE-1`, which places the retry advisor one level of precedence above the lowest level. By default, the primary transaction advisor has the lowest level of precedence (`Ordered.LOWEST_PRECEDENCE`). This means that the retry logic will be evaluated before a transaction is opened. - -For more details about advice ordering, see [Advice Ordering](https://docs.spring.io/spring/docs/current/spring-framework-reference/core.html#aop-ataspectj-advice-ordering) on the Spring documentation site. 
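
In code, the ordering relationship described above amounts to two annotations. The following sketch (with hypothetical class names) keeps the primary transaction advisor at the default, lowest precedence while placing the retry aspect one level higher:

~~~ java
import org.aspectj.lang.annotation.Aspect;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.Ordered;
import org.springframework.core.annotation.Order;
import org.springframework.transaction.annotation.EnableTransactionManagement;

// The primary transaction advisor keeps the default (lowest) precedence.
@Configuration
@EnableTransactionManagement(order = Ordered.LOWEST_PRECEDENCE)
class TransactionOrderingSketch {
}

// The retry aspect sits one level above it, so retries wrap the transaction itself.
@Aspect
@Order(Ordered.LOWEST_PRECEDENCE - 1)
class RetryAspectSketch {
    // @Around advice around @Transactional methods goes here (see the next section).
}
~~~

With this arrangement, each retried attempt re-enters the primary advisor and therefore runs in a fresh transaction.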
- -#### Transaction retries - -Transactions may require retries if they experience deadlock or [transaction contention](performance-best-practices-overview.html#transaction-contention) that cannot be resolved without allowing [serialization](demo-serializable.html) anomalies. To handle transactions that are aborted due to transient serialization errors, we highly recommend writing [client-side transaction retry logic](transactions.html#client-side-intervention) into applications written on CockroachDB. - -In this application, transaction retry logic is written into the methods of the `RetryableTransactionAspect` class, declared an aspect with the `@Aspect` annotation. Here are the contents of [`RetryableTransactionAspect.java`](https://github.com/cockroachlabs/roach-data/blob/master/roach-data-jpa/src/main/java/io/roach/data/jpa/RetryableTransactionAspect.java): - -{% include_cached copy-clipboard.html %} -~~~ java -{% remote_include https://raw.githubusercontent.com/cockroachlabs/roach-data/master/roach-data-jpa/src/main/java/io/roach/data/jpa/RetryableTransactionAspect.java %} -~~~ - -The `anyTransactionBoundaryOperation` method is declared as a pointcut with the [`@Pointcut` annotation](https://docs.spring.io/spring/docs/current/spring-framework-reference/core.html#aop-pointcuts). In Spring, pointcut declarations must include an expression to determine where [join points](https://en.wikipedia.org/wiki/Join_point) occur in the application control flow. To help define these expressions, Spring supports a set of [designators](https://docs.spring.io/spring/docs/current/spring-framework-reference/core.html#aop-pointcuts-designators). The application uses two of them here: `execution`, which matches method execution joint points (i.e., defines a joint point when a specific method is executed, in this case, *any* method in the `io.roach.` namespace), and `@annotation`, which limits the matches to methods with a specific annotation, in this case `@Transactional`. - -`retryableOperation` handles the application retry logic in the form of advice. Spring supports [several different annotations to declare advice](https://docs.spring.io/spring/docs/current/spring-framework-reference/core.html#aop-advice). The [`@Around` annotation](https://docs.spring.io/spring/docs/current/spring-framework-reference/core.html#aop-ataspectj-around-advice) allows an advice method to work before and after the `anyTransactionBoundaryOperation(transactional)` join point. It also allows the advice method to call the next matching advisor with the `ProceedingJoinPoint.proceed();` method. - -`retryableOperation` first verifies that there is no active transaction. It then increments the retry count and attempts to proceed to the next advice method with the `ProceedingJoinPoint.proceed()` method. If the underlying methods (i.e., the primary transaction advisor's methods and the [annotated query methods](#spring-repositories)) succeed, the transaction has been successfully committed to the database. The results are then returned and the application flow continues. If a failure in the underlying layers occurs due to a transient error, then the transaction is retried. The time between each retry grows with each retry until the maximum number of retries is reached. 
- -## See also - -Spring documentation: - -- [Spring Boot website](https://spring.io/projects/spring-boot) -- [Spring Framework Overview](https://docs.spring.io/spring/docs/current/spring-framework-reference/overview.html#overview) -- [Spring Core documentation](https://docs.spring.io/spring/docs/current/spring-framework-reference/core.html#spring-core) -- [Accessing Data with JPA](https://spring.io/guides/gs/accessing-data-jpa/) -- [Data Access with JDBC](https://docs.spring.io/spring/docs/current/spring-framework-reference/data-access.html#jpa) -- [Spring Web MVC](https://docs.spring.io/spring/docs/current/spring-framework-reference/web.html#mvc) - -CockroachDB documentation: - -- [Learn CockroachDB SQL](learn-cockroachdb-sql.html) -- [Client Connection Parameters](connection-parameters.html) -- [CockroachDB Developer Guide](developer-guide-overview.html) -- [Example Apps](example-apps.html) -- [Transactions](transactions.html) diff --git a/src/current/v22.1/build-a-spring-app-with-cockroachdb-mybatis.md b/src/current/v22.1/build-a-spring-app-with-cockroachdb-mybatis.md deleted file mode 100644 index 22fdcbfbb28..00000000000 --- a/src/current/v22.1/build-a-spring-app-with-cockroachdb-mybatis.md +++ /dev/null @@ -1,414 +0,0 @@ ---- -title: Build a Spring App with CockroachDB and MyBatis -summary: Learn how to use CockroachDB from a simple Spring application with MyBatis. -toc: true -twitter: false -docs_area: get_started ---- - -{% include {{ page.version.version }}/filter-tabs/crud-java.md %} - -This tutorial shows you how to build a simple [Spring Boot](https://spring.io/projects/spring-boot) application with CockroachDB, using the [MyBatis-Spring-Boot-Starter module](http://mybatis.org/spring-boot-starter) for data access. - -## Before you begin - -{% include {{page.version.version}}/app/before-you-begin.md %} - -## Step 1. Install JDK - -Download and install a Java Development Kit. MyBatis-Spring supports Java versions 8+. In this tutorial, we use [JDK 11 from OpenJDK](https://openjdk.java.net/install/). - -## Step 2. Install Gradle - -This example application uses [Gradle](https://gradle.org/) to manage all application dependencies. Spring supports Gradle versions 6+. - -To install Gradle on macOS, run the following command: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ brew install gradle -~~~ - -To install Gradle on a Debian-based Linux distribution like Ubuntu: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ apt-get install gradle -~~~ - -To install Gradle on a Red Hat-based Linux distribution like Fedora: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ dnf install gradle -~~~ - -For other ways to install Gradle, see [its official documentation](https://docs.gradle.org/current/userguide/installation.html). - -## Step 3. Get the application code - -To get the application code, download or clone the [`mybatis-cockroach-demo` repository](https://github.com/jeffgbutler/mybatis-cockroach-demo). - -## Step 4. Create the `maxroach` user and `bank` database - -
                              - -Start the [built-in SQL shell](cockroach-sql.html): - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach sql --certs-dir=certs -~~~ - -In the SQL shell, issue the following statements to create the `maxroach` user and `bank` database: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE USER IF NOT EXISTS maxroach; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE DATABASE bank; -~~~ - -Give the `bank` user the necessary permissions: - -{% include_cached copy-clipboard.html %} -~~~ sql -> GRANT ALL ON DATABASE bank TO maxroach; -~~~ - -Exit the SQL shell: - -{% include_cached copy-clipboard.html %} -~~~ sql -> \q -~~~ - -## Step 5. Generate a certificate for the `maxroach` user - -Create a certificate and key for the `maxroach` user by running the following command. The code samples will run as this user. - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach cert create-client maxroach --certs-dir=certs --ca-key=my-safe-directory/ca.key --also-generate-pkcs8-key -~~~ - -The [`--also-generate-pkcs8-key` flag](cockroach-cert.html#flag-pkcs8) generates a key in [PKCS#8 format](https://tools.ietf.org/html/rfc5208), which is the standard key encoding format in Java. In this case, the generated PKCS8 key will be named `client.maxroach.key.pk8`. - -## Step 6. Run the application - -To run the application: - -1. Open and edit the `src/main/resources/application.yml` file so that the `url` field specifies the full [connection string](connection-parameters.html#connect-using-a-url) to the [running CockroachDB cluster](#before-you-begin). To connect to a secure cluster, this connection string must set the `sslmode` connection parameter to `require`, and specify the full path to the client, node, and user certificates in the connection parameters. For example: - - {% include_cached copy-clipboard.html %} - ~~~ yml - ... - datasource: - url: jdbc:postgresql://localhost:26257/bank?ssl=true&sslmode=require&sslrootcert=certs/ca.crt&sslkey=certs/client.maxroach.key.pk8&sslcert=certs/client.maxroach.crt - ... - ~~~ -1. Open a terminal, and navigate to the `mybatis-cockroach-demo` project directory: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cd /mybatis-cockroach-demo - ~~~ - -1. Run the Gradle script to download the application dependencies, compile the code, and run the application: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ ./gradlew bootRun - ~~~ - -
                              - -
                              - -Start the [built-in SQL shell](cockroach-sql.html): - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure -~~~ - -In the SQL shell, issue the following statements to create the `maxroach` user and `bank` database: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE USER IF NOT EXISTS maxroach; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE DATABASE bank; -~~~ - -Give the `bank` user the necessary permissions: - -{% include_cached copy-clipboard.html %} -~~~ sql -> GRANT ALL ON DATABASE bank TO maxroach; -~~~ - -Exit the SQL shell: - -{% include_cached copy-clipboard.html %} -~~~ sql -> \q -~~~ - -## Step 6. Run the application - -To run the application: - -1. Open and edit the `src/main/resources/application.yml` file so that the `url` field specifies the full [connection string](connection-parameters.html#connect-using-a-url) to the [running CockroachDB cluster](#before-you-begin). For example: - - ~~~ yaml - ... - datasource: - url: jdbc:postgresql://localhost:26257/bank?ssl=false - ... - ~~~ -1. Open a terminal, and navigate to the `mybatis-cockroach-demo` project directory: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cd /mybatis-cockroach-demo - ~~~ - -1. Run the Gradle script to download the application dependencies, compile the code, and run the application: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ ./gradlew bootRun - ~~~ - -
                              - -The output should look like the following: - -~~~ -> Task :bootRun - - . ____ _ __ _ _ - /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ -( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ - \\/ ___)| |_)| | | | | || (_| | ) ) ) ) - ' |____| .__|_| |_|_| |_\__, | / / / / - =========|_|==============|___/=/_/_/_/ - :: Spring Boot :: (v2.2.6.RELEASE) - -2020-06-01 14:40:04.333 INFO 55970 --- [ main] c.e.c.CockroachDemoApplication : Starting CockroachDemoApplication on MyComputer with PID 55970 (path/mybatis-cockroach-demo/build/classes/java/main started by user in path/mybatis-cockroach-demo) -2020-06-01 14:40:04.335 INFO 55970 --- [ main] c.e.c.CockroachDemoApplication : No active profile set, falling back to default profiles: default -2020-06-01 14:40:05.195 INFO 55970 --- [ main] c.e.c.CockroachDemoApplication : Started CockroachDemoApplication in 1.39 seconds (JVM running for 1.792) -2020-06-01 14:40:05.216 INFO 55970 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting... -2020-06-01 14:40:05.611 INFO 55970 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Start completed. -deleteAllAccounts: - => 2 total deleted accounts -insertAccounts: - => 2 total new accounts in 1 batches -printNumberOfAccounts: - => Number of accounts at time '14:40:05.660226': - => 2 total accounts -printBalances: - => Account balances at time '14:40:05.678942': - ID 1 => $1000 - ID 2 => $250 -transferFunds: - => $100 transferred between accounts 1 and 2, 2 rows updated -printBalances: - => Account balances at time '14:40:05.688511': - ID 1 => $900 - ID 2 => $350 -bulkInsertRandomAccountData: - => finished, 500 total rows inserted in 1 batches -printNumberOfAccounts: - => Number of accounts at time '14:40:05.960214': - => 502 total accounts -2020-06-01 14:40:05.968 INFO 55970 --- [extShutdownHook] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Shutdown initiated... -2020-06-01 14:40:05.993 INFO 55970 --- [extShutdownHook] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Shutdown completed. - -BUILD SUCCESSFUL in 12s -3 actionable tasks: 3 executed -~~~ - -The application runs a number of test functions that result in reads and writes to the `accounts` table in the `bank` database. - -For more details about the application code, see [Application details](#application-details). - -## Application details - -This section guides you through the different components of the application project in detail. - -### Main process - -The main process of the application is defined in `src/main/java/com/example/cockroachdemo/CockroachDemoApplication.java`: - -{% include_cached copy-clipboard.html %} -~~~ java -{% remote_include https://raw.githubusercontent.com/jeffgbutler/mybatis-cockroach-demo/master/src/main/java/com/example/cockroachdemo/CockroachDemoApplication.java %} -~~~ - -The `SpringApplication.run` call in the `main` method bootstraps and launches a Spring application. The [`@SpringBootApplication` annotation](https://docs.spring.io/spring-boot/docs/current/reference/html/using-spring-boot.html#using-boot-using-springbootapplication-annotation) on the `CockroachDemoApplication` class triggers Spring's [component scanning](https://docs.spring.io/spring-boot/docs/current/reference/html/using-spring-boot.html#using-boot-structuring-your-code) and [auto-configuration](https://docs.spring.io/spring-boot/docs/current/reference/html/using-spring-boot.html#using-boot-auto-configuration) features. 
- -The `BasicExample` class, defined in `src/main/java/com/example/cockroachdemo/BasicExample.java`, is one of the components detected in the component scan: - -{% include_cached copy-clipboard.html %} -~~~ java -{% remote_include https://raw.githubusercontent.com/jeffgbutler/mybatis-cockroach-demo/master/src/main/java/com/example/cockroachdemo/BasicExample.java %} -~~~ - -`BasicExample` implements the [Spring `CommandLineRunner` interface](https://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#boot-features-command-line-runner). Implementations of this interface automatically run when detected in a Spring project directory. `BasicExample` runs a series of test methods that are eventually executed as SQL queries in the [data access layer of the application](#mappers). - -### Configuration - -All [MyBatis-Spring](https://mybatis.org/spring/) applications need a [`DataSource`](https://docs.spring.io/spring-boot/docs/current/reference/html/spring-boot-features.html#boot-features-configure-datasource), a [`SqlSessionFactory`](https://mybatis.org/spring/factorybean.html), and at least one [mapper interface](https://mybatis.org/spring/mappers.html). The [MyBatis-Spring-Boot-Starter](https://mybatis.org/spring-boot-starter/mybatis-spring-boot-autoconfigure) module, built on [MyBatis](https://mybatis.org/mybatis-3/) and MyBatis-Spring, and used by this application, greatly simplifies how you configure each of these required elements. - -Applications that use MyBatis-Spring-Boot-Starter typically need just an annotated mapper interface and an existing `DataSource` in the Spring environment. The module detects the `DataSource`, creates a `SqlSessionFactory` from the `DataSource`, creates a thread-safe [`SqlSessionTemplate`](https://mybatis.org/spring/sqlsession.html#SqlSessionTemplate) with the `SqlSessionFactory`, and then auto-scans the mappers and links them to the `SqlSessionTemplate` for injection. The `SqlSessionTemplate` automatically commits, rolls back, and closes sessions, based on the application's [Spring-based transaction configuration](https://docs.spring.io/spring/docs/current/spring-framework-reference/data-access.html#transaction). - -This sample application implements [batch write operations](insert.html#performance-best-practices), a CockroachDB best practice for executing multiple `INSERT` and `UPSERT` statements. MyBatis applications that support batch operations require some additional configuration work, even if the application uses MyBatis-Spring-Boot-Starter: - -- The application must define a specific mapper interface for batch query methods. -- The application must define a `SqlSessionTemplate` constructor, specifically for batch operations, that uses the [`BATCH` executor type](https://mybatis.org/mybatis-3/apidocs/reference/org/apache/ibatis/executor/BatchExecutor.html). -- The batch mapper must be explicitly registered with the batch-specific `SqlSessionTemplate`. 
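
In outline, meeting these requirements usually comes down to a few beans like the ones in the following sketch. The bean and class names here are illustrative only; the project's actual configuration class is shown next.

~~~ java
import com.example.cockroachdemo.batchmapper.BatchAccountMapper;
import org.apache.ibatis.session.ExecutorType;
import org.apache.ibatis.session.SqlSessionFactory;
import org.mybatis.spring.SqlSessionTemplate;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class BatchConfigSketch {

    // A SqlSessionTemplate built with the BATCH executor type queues statements
    // and sends them to the database when the session is flushed.
    @Bean
    public SqlSessionTemplate batchSqlSessionTemplate(SqlSessionFactory sqlSessionFactory) {
        return new SqlSessionTemplate(sqlSessionFactory, ExecutorType.BATCH);
    }

    // The batch mapper interface (see the Batch account mapper section below) is
    // registered against the batch-specific template rather than auto-scanned.
    @Bean
    public BatchAccountMapper batchAccountMapper(SqlSessionTemplate batchSqlSessionTemplate) {
        return batchSqlSessionTemplate.getMapper(BatchAccountMapper.class);
    }
}
~~~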
-
-The class defined in `src/main/java/com/example/cockroachdemo/MyBatisConfiguration.java` configures the application to meet these requirements:
-
-{% include_cached copy-clipboard.html %}
-~~~ java
-{% remote_include https://raw.githubusercontent.com/jeffgbutler/mybatis-cockroach-demo/master/src/main/java/com/example/cockroachdemo/MyBatisConfiguration.java %}
-~~~
-
-This class explicitly defines the batch `SqlSessionTemplate` (i.e., `batchSqlSessionTemplate`) and registers the batch mapper interface defined in [`src/main/java/com/example/cockroachdemo/batchmapper/BatchAccountMapper.java`](#mappers) with `batchSqlSessionTemplate`. To complete the MyBatis configuration, the class also declares a `DataSource`, and defines the remaining `SqlSessionFactory` and `SqlSessionTemplate` beans.
-
-Note that a configuration class is not required for MyBatis-Spring-Boot-Starter applications that do not implement batch operations.
-
-### Data source
-
-`src/main/resources/application.yml` contains the metadata used to create a connection to the CockroachDB cluster:
-
-{% include_cached copy-clipboard.html %}
-~~~ yaml
-{% remote_include https://raw.githubusercontent.com/jeffgbutler/mybatis-cockroach-demo/master/src/main/resources/application.yml %}
-~~~
-
-Spring Boot uses the application's `datasource` property [to auto-configure the database connection](https://docs.spring.io/spring-boot/docs/current/reference/html/spring-boot-features.html#boot-features-configure-datasource). This database connection configuration can be injected into the application's `SqlSessionFactoryBean`, as is explicitly done in the [MyBatisConfiguration](#configuration) configuration class definition.
-
-### Mappers
-
-All MyBatis applications require at least one mapper interface. These mappers take the place of manually defined data access objects (DAOs). They provide the other layers of the application with an interface to the database.
-
-MyBatis-Spring-Boot-Starter usually scans the project for interfaces annotated with `@Mapper`, links the interfaces to a `SqlSessionTemplate`, and registers them with Spring so they can be [injected into the application's Spring beans](https://docs.spring.io/spring-boot/docs/current/reference/html/using-spring-boot.html#using-boot-spring-beans-and-dependency-injection). As mentioned in the [Configuration section](#configuration), because the application supports batch writes, the two mapper interfaces in the application are registered and linked manually in the `MyBatisConfiguration` configuration class definition.
-
-#### Account mapper
-
-`src/main/java/com/example/cockroachdemo/mapper/AccountMapper.java` defines the mapper interface to the `accounts` table using the [MyBatis Java API](https://mybatis.org/mybatis-3/java-api.html):
-
-{% include_cached copy-clipboard.html %}
-~~~ java
-{% remote_include https://raw.githubusercontent.com/jeffgbutler/mybatis-cockroach-demo/master/src/main/java/com/example/cockroachdemo/mapper/AccountMapper.java %}
-~~~
-
-The `@Mapper` annotation declares the interface as a mapper for MyBatis to scan. The SQL statement annotations on each of the interface methods map them to SQL queries. For example, the first method, `deleteAllAccounts()`, is marked as a `DELETE` statement with the `@Delete` annotation. This method executes the SQL statement specified in the string passed to the annotation, "`delete from accounts`", which deletes all rows in the `accounts` table.
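-
-A hypothetical, trimmed-down mapper in the same style looks like the following. The method names and statements here are for illustration only; `AccountMapper.java` in the repository declares the full set of methods used by the application.
-
-~~~ java
-import org.apache.ibatis.annotations.Delete;
-import org.apache.ibatis.annotations.Mapper;
-import org.apache.ibatis.annotations.Select;
-
-// Illustrative interface only; not part of the sample repository.
-@Mapper
-public interface ExampleAccountMapper {
-
-    // Each annotation binds the method to the SQL statement it executes.
-    @Delete("delete from accounts")
-    int deleteAllAccounts();
-
-    @Select("select count(*) from accounts")
-    long numberOfAccounts();
-}
-~~~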
-
-#### Batch account mapper
-
-`src/main/java/com/example/cockroachdemo/batchmapper/BatchAccountMapper.java` defines a mapper interface for [batch writes](insert.html#performance-best-practices):
-
-{% include_cached copy-clipboard.html %}
-~~~ java
-{% remote_include https://raw.githubusercontent.com/jeffgbutler/mybatis-cockroach-demo/master/src/main/java/com/example/cockroachdemo/batchmapper/BatchAccountMapper.java %}
-~~~
-
-This interface has a single `INSERT` statement query method, along with a method for flushing (i.e., executing) a batch of statements.
-
-### Services
-
-`src/main/java/com/example/cockroachdemo/service/AccountService.java` defines the service interface, with a number of methods for reading and writing to the database:
-
-{% include_cached copy-clipboard.html %}
-~~~ java
-{% remote_include https://raw.githubusercontent.com/jeffgbutler/mybatis-cockroach-demo/master/src/main/java/com/example/cockroachdemo/service/AccountService.java %}
-~~~
-
-`MyBatisAccountService.java` implements the `AccountService` interface, using the mappers defined in [`AccountMapper.java` and `BatchAccountMapper.java`](#mappers), and the models defined in [`Account.java` and `BatchResults.java`](#models):
-
-{% include_cached copy-clipboard.html %}
-~~~ java
-{% remote_include https://raw.githubusercontent.com/jeffgbutler/mybatis-cockroach-demo/master/src/main/java/com/example/cockroachdemo/service/MyBatisAccountService.java %}
-~~~
-
-Note that the public methods (i.e., the methods to be called by other classes in the project) are annotated as [`@Transactional`](https://docs.spring.io/spring/docs/current/spring-framework-reference/data-access.html#transaction-declarative-annotations) methods. This ensures that all of the SQL statements executed in the data access layer are run within the context of a [database transaction](transactions.html).
-
-`@Transactional` takes a number of parameters, including a `propagation` parameter that determines the transaction propagation behavior around an object (i.e., at what point in the stack a transaction starts and ends). `propagation` is set to `REQUIRES_NEW` for the methods in the service layer, meaning that a new transaction is created each time a request is made to the service layer. With this propagation behavior, the application follows the [entity-control-boundary (ECB) pattern](https://en.wikipedia.org/wiki/Entity-control-boundary): the service boundaries determine where a [transaction](transactions.html) starts and ends, rather than the lower-level query methods of the [mapper interfaces](#mappers).
-
-For more details on aspect-oriented transaction management in this application, [see below](#transaction-management).
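-
-For illustration, a hypothetical service method using this propagation setting could look like the following sketch. It is not the repository's `MyBatisAccountService`; it only shows where the annotation sits.
-
-~~~ java
-import org.springframework.stereotype.Service;
-import org.springframework.transaction.annotation.Propagation;
-import org.springframework.transaction.annotation.Transactional;
-
-// Illustrative service only; the real implementation is MyBatisAccountService.
-@Service
-public class ExampleAccountService {
-
-    // A new transaction starts for every call into the service layer, so the
-    // service boundary defines where the transaction begins and ends.
-    @Transactional(propagation = Propagation.REQUIRES_NEW)
-    public void transferFunds(long fromId, long toId, long amount) {
-        // Mapper calls made here all execute inside the same transaction.
-    }
-}
-~~~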
- -### Models - -Instances of the `Account` class, defined in `src/main/java/com/example/cockroachdemo/model/Account.java`, represent rows in the `accounts` table: - -{% include_cached copy-clipboard.html %} -~~~ java -{% remote_include https://raw.githubusercontent.com/jeffgbutler/mybatis-cockroach-demo/master/src/main/java/com/example/cockroachdemo/model/Account.java %} -~~~ - -Instances of the `BatchResults` class, defined in `src/main/java/com/example/cockroachdemo/model/BatchResults.java`, hold metadata about a batch write operation and its results: - -{% include_cached copy-clipboard.html %} -~~~ java -{% remote_include https://raw.githubusercontent.com/jeffgbutler/mybatis-cockroach-demo/master/src/main/java/com/example/cockroachdemo/model/BatchResults.java %} -~~~ - -### Transaction management - -MyBatis-Spring supports Spring's [declarative, aspect-oriented transaction management syntax](https://docs.spring.io/spring/docs/current/spring-framework-reference/data-access.html#transaction-declarative), including the [`@Transactional`](https://docs.spring.io/spring/docs/current/spring-framework-reference/data-access.html#transaction-declarative-annotations) annotation and [AspectJ's AOP annotations](https://docs.spring.io/spring/docs/current/spring-framework-reference/data-access.html#transaction-declarative-aspectj). - -Transactions may require retries if they experience deadlock or [transaction contention](performance-best-practices-overview.html#transaction-contention) that cannot be resolved without allowing [serialization](demo-serializable.html) anomalies. To handle transactions that are aborted due to transient serialization errors, we highly recommend writing [client-side transaction retry logic](transactions.html#client-side-intervention) into applications written on CockroachDB. In this application, transaction retry logic is written into the methods of the `RetryableTransactionAspect` class, defined in `src/main/java/com/example/cockroachdemo/RetryableTransactionAspect.java`: - -{% include_cached copy-clipboard.html %} -~~~ java -{% remote_include https://raw.githubusercontent.com/jeffgbutler/mybatis-cockroach-demo/master/src/main/java/com/example/cockroachdemo/RetryableTransactionAspect.java %} -~~~ - -The [`@Aspect` annotation](https://docs.spring.io/spring/docs/current/spring-framework-reference/core.html#aop-at-aspectj) declares `RetryableTransactionAspect` an [aspect](https://en.wikipedia.org/wiki/Aspect_(computer_programming)), with [pointcut](https://en.wikipedia.org/wiki/Pointcut) and [advice](https://en.wikipedia.org/wiki/Advice_(programming)) methods. - -#### Transactional pointcut - -The [`@Pointcut` annotation](https://docs.spring.io/spring/docs/current/spring-framework-reference/core.html#aop-pointcuts) declares the `anyTransactionBoundaryOperation` method the pointcut for determining when to execute the aspect's advice. The `@annotation` [designator](https://docs.spring.io/spring/docs/current/spring-framework-reference/core.html#aop-pointcuts-designators) passed to the `@Pointcut` annotation limits the matches (i.e., [join points](https://en.wikipedia.org/wiki/Join_point)) to method calls with a specific annotation, in this case, `@Transactional`. - -#### Transaction retry advice - -`retryableOperation` handles the application retry logic, with [exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff), as the advice to execute at an `anyTransactionBoundaryOperation(transactional)` join point. 
Spring supports [several different annotations to declare advice](https://docs.spring.io/spring/docs/current/spring-framework-reference/core.html#aop-advice). The [`@Around` annotation](https://docs.spring.io/spring/docs/current/spring-framework-reference/core.html#aop-ataspectj-around-advice) allows an advice method to work before and after the join point. It also gives the advice method control over executing any other matching advisors.
-
-`retryableOperation` first verifies that there is no active transaction. It then increments the retry count and attempts to proceed to the next advice method with the `ProceedingJoinPoint.proceed()` method. If the underlying service method (i.e., the method annotated with `@Transactional`) succeeds, the results are returned and the application flow continues. If the method fails, the transaction is retried, and the delay between retries grows exponentially until the maximum number of retries is reached.
-
-#### Advice ordering
-
-Spring automatically adds [transaction management advice](https://docs.spring.io/spring/docs/current/spring-framework-reference/data-access.html#tx-decl-explained) to all methods annotated with `@Transactional`. Because the pointcut for `RetryableTransactionAspect` also matches methods annotated with `@Transactional`, there will always be two advisors that match the same pointcut. When multiple advisors match at the same pointcut, an `@Order` annotation on an advisor's aspect can specify the order in which the advice should be evaluated.
-
-To control when and how often a transaction is retried, the transaction retry advice must be executed outside the context of a transaction (i.e., it must be evaluated before the primary transaction management advisor). By default, the primary transaction management advisor is given the lowest level of precedence. The `@Order` annotation on `RetryableTransactionAspect` is passed `Ordered.LOWEST_PRECEDENCE-1`, which places this aspect's advice one level of precedence above the primary transaction advisor, so the retry logic is evaluated before the transaction management advice.
-
-For more details about advice ordering in Spring, see [Advice Ordering](https://docs.spring.io/spring/docs/current/spring-framework-reference/core.html#aop-ataspectj-advice-ordering) on the Spring documentation site.
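-
-The following condensed sketch ties these pieces together. It is hypothetical and greatly simplified (for example, it retries on any exception); the repository's `RetryableTransactionAspect.java`, included above, is the authoritative version, and it retries only on CockroachDB serialization failures (SQLSTATE `40001`) after verifying that no transaction is already active.
-
-~~~ java
-import org.aspectj.lang.ProceedingJoinPoint;
-import org.aspectj.lang.annotation.Around;
-import org.aspectj.lang.annotation.Aspect;
-import org.aspectj.lang.annotation.Pointcut;
-import org.springframework.core.Ordered;
-import org.springframework.core.annotation.Order;
-import org.springframework.stereotype.Component;
-import org.springframework.transaction.annotation.Transactional;
-
-// Simplified, illustrative aspect; not the repository's implementation.
-@Aspect
-@Component
-// LOWEST_PRECEDENCE - 1 evaluates this advice before Spring's transaction advisor.
-@Order(Ordered.LOWEST_PRECEDENCE - 1)
-public class ExampleRetryAspect {
-
-    private static final int MAX_RETRIES = 3;
-
-    // Join points: any method call annotated with @Transactional.
-    @Pointcut("@annotation(transactional)")
-    public void anyTransactionBoundaryOperation(Transactional transactional) {
-    }
-
-    @Around("anyTransactionBoundaryOperation(transactional)")
-    public Object retryableOperation(ProceedingJoinPoint pjp, Transactional transactional)
-            throws Throwable {
-        int attempt = 0;
-        while (true) {
-            try {
-                // Proceed to the transaction advisor and the service method.
-                return pjp.proceed();
-            } catch (Exception e) {
-                attempt++;
-                if (attempt >= MAX_RETRIES) {
-                    throw e;
-                }
-                // Exponential backoff between retries. A production aspect
-                // would retry only on serialization errors (SQLSTATE 40001).
-                Thread.sleep((long) Math.pow(2, attempt) * 100L);
-            }
-        }
-    }
-}
-~~~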
- -## See also - -Spring documentation: - -- [Spring Boot website](https://spring.io/projects/spring-boot) -- [Spring Framework Overview](https://docs.spring.io/spring/docs/current/spring-framework-reference/overview.html#overview) -- [Spring Core documentation](https://docs.spring.io/spring/docs/current/spring-framework-reference/core.html#spring-core) -- [MyBatis documentation](https://mybatis.org/mybatis-3/) -- [MyBatis Spring integration](https://mybatis.org/spring/) - -CockroachDB documentation: - -- [Learn CockroachDB SQL](learn-cockroachdb-sql.html) -- [Client Connection Parameters](connection-parameters.html) -- [CockroachDB Developer Guide](developer-guide-overview.html) -- [Example Apps](example-apps.html) -- [Transactions](transactions.html) diff --git a/src/current/v22.1/build-a-typescript-app-with-cockroachdb.md b/src/current/v22.1/build-a-typescript-app-with-cockroachdb.md deleted file mode 100644 index 0d3cd418320..00000000000 --- a/src/current/v22.1/build-a-typescript-app-with-cockroachdb.md +++ /dev/null @@ -1,134 +0,0 @@ ---- -title: Build a TypeScript App with CockroachDB and TypeORM -summary: Learn how to use CockroachDB with the TypeORM framework. -toc: true -twitter: false -referral_id: docs_typescript_typeorm -docs_area: get_started ---- - -{% include {{ page.version.version }}/filter-tabs/crud-js.md %} - -This tutorial shows you how run a simple application built with [TypeORM](https://typeorm.io/#/). - -## Step 1. Start CockroachDB - -{% include {{ page.version.version }}/setup/sample-setup-parameters.md %} - -## Step 2. Get the code - -1. Clone [the code's GitHub repository](https://github.com/cockroachlabs/example-app-typescript-typeorm): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ git clone git@github.com:cockroachlabs/example-app-typescript-typeorm.git - ~~~ - -1. Navigate to the repo directory and install the application dependencies: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cd example-app-typescript-typeorm - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ npm install - ~~~ - -## Step 3. Configure your CockroachDB connection - -
                              - -1. Open the `datasource.ts` file, and comment out the `ssl: true`, `extra` and `options` configuration properties. - -1. In the `datasource.ts` file, uncomment `ssl: { rejectUnauthorized: false }`. - - {{site.data.alerts.callout_danger}} - Only use `ssl: { rejectUnauthorized: false }` in development, for insecure connections. - {{site.data.alerts.end}} - - The `DataSource` configuration should look similar to the following: - - ~~~ ts - export const AppDataSource = new DataSource({ - type: "cockroachdb", - url: process.env.DATABASE_URL, - ssl: { rejectUnauthorized: false }, // For insecure connections only - synchronize: true, - logging: false, - entities: ["src/entity/**/*.ts"], - migrations: ["src/migration/**/*.ts"], - subscribers: ["src/subscriber/**/*.ts"], - }) - ~~~ - -1. Set the `DATABASE_URL` environment variable to the connection string provided in the `cockroach` welcome text: - - - {% include_cached copy-clipboard.html %} - ~~~ shell - export DATABASE_URL="postgresql://root@localhost:26257?sslmode=disable" - ~~~ - -
                              - -
                              - -1. Set the `DATABASE_URL` environment variable to a CockroachDB connection string compatible with TypeORM. - - {% include_cached copy-clipboard.html %} - ~~~ shell - export DATABASE_URL="" - ~~~ - - TypeORM accepts the following format for CockroachDB {{ site.data.products.serverless }} connection strings: - - {% include_cached copy-clipboard.html %} - ~~~ - postgresql://:@:/ - ~~~ - -
                              - -## Step 4. Run the code - -Start the application: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ npm start -~~~ - -You should see the following output in your terminal: - -~~~ -Inserting a new account into the database... -Saved a new account. -Printing balances from account 1db0f34a-55e8-42e7-adf1-49e76010b763. -[ - Account { id: '1db0f34a-55e8-42e7-adf1-49e76010b763', balance: 1000 } -] -Inserting a new account into the database... -Saved a new account. -Printing balances from account 4e26653a-3821-48c8-a481-47eb73b3e4cc. -[ - Account { id: '4e26653a-3821-48c8-a481-47eb73b3e4cc', balance: 250 } -] -Transferring 500 from account 1db0f34a-55e8-42e7-adf1-49e76010b763 to account 4e26653a-3821-48c8-a481-47eb73b3e4cc. -Transfer complete. -Printing balances from account 1db0f34a-55e8-42e7-adf1-49e76010b763. -[ - Account { id: '1db0f34a-55e8-42e7-adf1-49e76010b763', balance: 1000 } -] -Printing balances from account 4e26653a-3821-48c8-a481-47eb73b3e4cc. -[ - Account { id: '4e26653a-3821-48c8-a481-47eb73b3e4cc', balance: 250 } -] -~~~ - -## What's next? - -Read more about using the [TypeORM](https://typeorm.io/#/). - -{% include {{page.version.version}}/app/see-also-links.md %} diff --git a/src/current/v22.1/bulk-delete-data.md b/src/current/v22.1/bulk-delete-data.md deleted file mode 100644 index 69dc5c0c981..00000000000 --- a/src/current/v22.1/bulk-delete-data.md +++ /dev/null @@ -1,569 +0,0 @@ ---- -title: Bulk-delete Data -summary: How to delete a large amount of data from a cluster -toc: true -docs_area: develop ---- - -There are several techniques to delete large amounts of data in CockroachDB. - -The simplest method is to use [TTL expressions](#batch-delete-expired-data) on rows to define when the data is expired, and then let CockroachDB handle the delete operations. - -To [manually delete a large number of rows](#manually-delete-data-using-batches) (i.e., tens of thousands of rows or more), we recommend iteratively deleting subsets of the rows that you want to delete, until all of the unwanted rows have been deleted. You can write a script to do this, or you can write a loop into your application. - -{{site.data.alerts.callout_success}} -If you want to delete all of the rows in a table (and not just a large subset of the rows), use a [`TRUNCATE` statement](#delete-all-of-the-rows-in-a-table). -{{site.data.alerts.end}} - -## Batch-delete "expired" data - -{% include {{page.version.version}}/sql/row-level-ttl.md %} - -For more information, see [Batch delete expired data with Row-Level TTL](row-level-ttl.html). - -## Manually delete data using batches - -This section provides guidance on batch deleting with the `DELETE` query filter [on an indexed column](#batch-delete-on-an-indexed-column) and [on a non-indexed column](#batch-delete-on-a-non-indexed-column). Filtering on an indexed column is both simpler to implement and more efficient, but adding an index to a table can slow down insertions to the table and may cause bottlenecks. Queries that filter on a non-indexed column must perform at least one full-table scan, a process that takes time proportional to the size of the entire table. - -{{site.data.alerts.callout_danger}} -Exercise caution when batch deleting rows from tables with foreign key constraints and explicit [`ON DELETE` foreign key actions](foreign-key.html#foreign-key-actions). 
To preserve `DELETE` performance on tables with foreign key actions, we recommend using smaller batch sizes, as additional rows updated or deleted due to `ON DELETE` actions can make batch loops significantly slower. -{{site.data.alerts.end}} - -### Before you begin - -Before reading this page, do the following: - -- [Create a CockroachDB {{ site.data.products.serverless }} cluster](../cockroachcloud/quickstart.html) or [start a local cluster](../cockroachcloud/quickstart.html?filters=local). -- [Install a Driver or ORM Framework](install-client-drivers.html). -- [Connect to the database](connect-to-the-database.html). -- [Insert data](insert-data.html) that you now want to delete. - - For the example on this page, we load a cluster with the [`tpcc` database](cockroach-workload.html#tpcc-workload) and data from [`cockroach workload`](cockroach-workload.html). - -### Batch delete on an indexed column - -For high-performance batch deletes, we recommending filtering the `DELETE` query on an [indexed column](indexes.html). - -{{site.data.alerts.callout_info}} -Having an indexed filtering column can make delete operations faster, but it might lead to bottlenecks in execution, especially if the filtering column is a [timestamp](timestamp.html). To reduce bottlenecks, we recommend using a [hash-sharded index](hash-sharded-indexes.html). -{{site.data.alerts.end}} - -Each iteration of a batch-delete loop should execute a transaction containing a single `DELETE` query. When writing this `DELETE` query: - -- Use a `WHERE` clause to filter on a column that identifies the unwanted rows. If the filtering column is not the primary key, the column should have [a secondary index](indexes.html). Note that if the filtering column is not already indexed, it is not beneficial to add an index just to speed up batch deletes. Instead, consider [batch deleting on non-indexed columns](#batch-delete-on-a-non-indexed-column). -- To ensure that rows are efficiently scanned in the `DELETE` query, add an [`ORDER BY`](order-by.html) clause on the filtering column. -- Use a [`LIMIT`](limit-offset.html) clause to limit the number of rows to the desired batch size. To determine the optimal batch size, try out different batch sizes (1,000 rows, 10,000 rows, 100,000 rows, etc.) and monitor the change in performance. -- Add a `RETURNING` clause to the end of the query that returns the filtering column values of the deleted rows. Then, using the values of the deleted rows, update the filter to match only the subset of remaining rows to delete. This narrows each query's scan to the fewest rows possible, and [preserves the performance of the deletes over time](delete.html#preserving-delete-performance-over-time). This pattern assumes that no new rows are generated that match on the `DELETE` filter during the time that it takes to perform the delete. - -#### Examples - -Choose the language for the example code. - -
                              - - - -
                              - -For example, suppose that you want to delete all rows in the [`tpcc`](cockroach-workload.html#tpcc-workload) `new_order` table where `no_w_id` is less than `5`, in batches of 5,000 rows. To do this, you can write a query that loops over batches of 5,000 rows, following the `DELETE` query guidance provided above. Note that in this case, `no_w_id` is the first column in the primary index, and, as a result, you do not need to create a secondary index on the column. - -
                              - -In Python using the psycopg2 driver, the script would look similar to the following: - -{% include_cached copy-clipboard.html %} -~~~ python -#!/usr/bin/env python3 - -import psycopg2 -import psycopg2.sql -import os - -conn = psycopg2.connect(os.environ.get('DATABASE_URL')) -filter = 4 -lastrow = None - -while True: - with conn: - with conn.cursor() as cur: - if lastrow: - filter = lastrow[0] - query = psycopg2.sql.SQL("DELETE FROM new_order WHERE no_w_id <= %s ORDER BY no_w_id DESC LIMIT 5000 RETURNING no_w_id") - cur.execute(query, (filter,)) - print(cur.statusmessage) - if cur.rowcount == 0: - break - lastrow = cur.fetchone() - -conn.close() -~~~ - -
                              - -
                              - -A simple JDBC application that bulk deletes rows in batches of 5000 would look like this: - -{% include_cached copy-clipboard.html %} -~~~ java -package com.cockroachlabs.bulkdelete; - -import java.sql.Connection; -import java.sql.PreparedStatement; -import java.sql.Statement; -import java.sql.ResultSet; -import java.sql.SQLException; -import java.util.Optional; - -import org.postgresql.ds.PGSimpleDataSource; - -public class App { - - public static void deleteData(Connection conn) { - boolean cont = true; - // the initial warehouse ID we will delete orders from - int warehouseId = 4; - try { - do { - System.out.println("Deleting data from warehouses <= to " + warehouseId); - String sql = "DELETE FROM new_order WHERE no_w_id <= ? ORDER BY no_w_id DESC LIMIT 5000 RETURNING no_w_id"; - // use a prepared statement and the current warehouse ID - PreparedStatement ps = conn.prepareStatement(sql, ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_READ_ONLY); - ps.setInt(1, warehouseId); - ResultSet results = ps.executeQuery(); - if (!results.next()) { - cont = false; - } else { - results.last(); - System.out.println("Deleted " + results.getRow() + " rows."); - // get the warehouse ID of the last row of this batch - warehouseId = results.getInt(1); - System.out.println("Warehouse ID is now " + warehouseId); - } - } while (cont); - } - catch(Exception e) { - return; - } - } - - public static void main(String[] args) throws SQLException { - // create the datasource for the JDBC driver - PGSimpleDataSource ds = new PGSimpleDataSource(); - ds.setApplicationName("docs_bulk_delete_java"); - // get the cluster JDBC URL from an environment variable - ds.setUrl(Optional.ofNullable(System.getenv("JDBC_DATABASE_URL")).orElseThrow( - () -> new IllegalArgumentException("JDBC_DATABASE_URL is not set."))); - try (Connection connection = ds.getConnection()) { - // call the method to perform the deletes - deleteData(connection); - } catch (SQLException e) { - e.printStackTrace(); - } - } -} -~~~ - -
                              - -
                              - -A simple C# Npgsql application that bulk deletes rows in batches of 5000 would look like this: - -{% include_cached copy-clipboard.html %} -~~~ csharp -using System; -using System.Data; -using System.Net.Security; -using Npgsql; - -namespace Cockroach -{ - class MainClass - { - static void Main(string[] args) - { - // create the connection string from the connection parameters - var connStringBuilder = new NpgsqlConnectionStringBuilder(); - connStringBuilder.Host = "cluster-name-743.g95.cockroachlabs.cloud"; - connStringBuilder.Port = 26257; - connStringBuilder.SslMode = SslMode.VerifyFull; - connStringBuilder.Username = "maxroach"; - connStringBuilder.Password = "notAGoodPassword"; - connStringBuilder.Database = "tpcc"; - connStringBuilder.ApplicationName = "docs_bulk_delete_csharp"; - // call the method to perform the deletes - DeleteRows(connStringBuilder.ConnectionString); - } - - static void DeleteRows(string connString) - { - // create the data source with the connection string - using var dataSource = NpgsqlDataSource.Create(connString); - { - using var connection = dataSource.OpenConnection(); - bool cont = true; - // the initial warehouse ID we will delete orders from - int warehouseId = 4; - do { - Console.WriteLine("Deleting data from warehouse <= to " + warehouseId); - using var cmd = new NpgsqlCommand("DELETE FROM new_order WHERE no_w_id <= (@p1) ORDER BY no_w_id DESC LIMIT 5000 RETURNING no_w_id", connection) - { - Parameters = - { - // using a prepared statement and the current warehouse ID - new("p1", warehouseId) - } - }; - using (var reader = cmd.ExecuteReader()) - { - if (reader.HasRows) - { - while (reader.Read()) - { - // Get the warehouse ID. When the result set is empty this will be - // the warehouse ID of the final row of this batch. - warehouseId = reader.GetInt32(0); - } - Console.WriteLine("Warehouse ID is now " + warehouseId); - } - else { - // All the rows have been deleted, so break out of the loop - cont = false; - } - Console.WriteLine("Deleted " + reader.RecordsAffected + " rows."); - } - } while (cont); - } - } - } -} -~~~ - -
                              - -This example iteratively deletes rows in batches of 5,000, until all of the rows where `no_w_id <= 4` are deleted. Note that at each iteration, the filter is updated to match a narrower subset of rows. - -### Batch delete on a non-indexed column - -If you cannot index the column that identifies the unwanted rows, we recommend defining the batch loop to execute separate read and write operations at each iteration: - -1. Execute a [`SELECT` query](selection-queries.html) that returns the primary key values for the rows that you want to delete. When writing the `SELECT` query: - - Use a `WHERE` clause that filters on the column identifying the rows. - - If you need to avoid [transaction contention](performance-best-practices-overview.html#transaction-contention) you can use an [`AS OF SYSTEM TIME` clause](as-of-system-time.html) at the end of the selection subquery, or run the selection query in a separate, read-only transaction with [`SET TRANSACTION AS OF SYSTEM TIME`](as-of-system-time.html#use-as-of-system-time-in-transactions). If you add an `AS OF SYSTEM TIME` clause, make sure your selection query to get the batches of rows is run outside of the window of the `AS OF SYSTEM TIME` clause. That is, if you use `AS OF SYSTEM TIME '-5s'` to find the rows to delete, you should wait at least 5 seconds before rerunning the select query. Otherwise you will retrieve rows that have already been deleted. - - Use a [`LIMIT`](limit-offset.html) clause to limit the number of rows queried to a subset of the rows that you want to delete. To determine the optimal `SELECT` batch size, try out different sizes (10,000 rows, 100,000 rows, 1,000,000 rows, etc.), and monitor the change in performance. Note that this `SELECT` batch size can be much larger than the batch size of rows that are deleted in the subsequent `DELETE` query. - - To ensure that rows are efficiently scanned in the subsequent `DELETE` query, include an [`ORDER BY`](order-by.html) clause on the primary key. - -1. Write a nested `DELETE` loop over the primary key values returned by the `SELECT` query, in batches smaller than the initial `SELECT` batch size. To determine the optimal `DELETE` batch size, try out different sizes (1,000 rows, 10,000 rows, 100,000 rows, etc.), and monitor the change in performance. Where possible, we recommend executing each `DELETE` in a separate transaction. - -For example, suppose that you want to delete all rows in the [`tpcc`](cockroach-workload.html#tpcc-workload) `history` table that are older than a month. You can create a script that loops over the data and deletes unwanted rows in batches, following the query guidance provided above. - -#### Examples - -Choose the language for the example code. - -
                              - - - -
                              - -
                              - -In Python, the script would look similar to the following: - -{% include_cached copy-clipboard.html %} -~~~ python -#!/usr/bin/env python3 - -import psycopg2 -import os -import time -import logging - - -def main(): - try: - dsn = os.environ.get("DATABASE_URL") - conn = psycopg2.connect(dsn) - except Exception as e: - logging.fatal("database connection failed") - logging.fatal(e) - exit - - while True: - with conn: - with conn.cursor() as cur: - cur.execute("SET TRANSACTION AS OF SYSTEM TIME '-5s'") - cur.execute("SELECT h_w_id, rowid FROM tpcc.history WHERE h_date < current_date() - INTERVAL '1 MONTH' ORDER BY h_w_id, rowid LIMIT 20000") - pkvals = list(cur) - if not pkvals: - return - while pkvals: - batch = pkvals[:5000] - pkvals = pkvals[5000:] - with conn: - with conn.cursor() as cur: - cur.execute("DELETE FROM tpcc.history WHERE (h_w_id, rowid) = ANY %s", (batch,)) - print(cur.statusmessage) - del batch - del pkvals - time.sleep(5) - - conn.close() - -if __name__ == "__main__": - main() -~~~ - -
                              - -
                              - -In Java, the code would look similar to: - -{% include_cached copy-clipboard.html %} -~~~ java -public static void deleteDataNonindexed(Connection conn) { - boolean cont = true; - try { - do { - // select the rows using the primary key - String select = "SELECT h_w_id, rowid FROM history WHERE h_date < current_date() - INTERVAL '1 MONTH' ORDER BY h_w_id, rowid LIMIT 20000"; - Statement st = conn.createStatement(); - ResultSet results = st.executeQuery(select); - List pkeys = new ArrayList<>(); - if (!results.isBeforeFirst()) { - cont = false; - } else { - System.out.println("Found results, deleting rows."); - while (results.next()) { - KeyFields kf = new KeyFields(); - kf.hwid = results.getInt(1); - kf.rowId = UUID.fromString(results.getString(2)); - pkeys.add(kf); - } - } - while (pkeys.size() > 0) { - // check the size of the list of primary keys - // if it is smaller than the batch size, set the last - // index of the batch size to the size of the list - int size = pkeys.size(); - int lastIndex; - if (size > 5000) { - lastIndex = 5000; - } else { - lastIndex = size; - } - // slice the list of primary keys to the batch size - String pkeyList = String.join(",", pkeys.subList(0, lastIndex).toString()); - String deleteStatement = new String("DELETE FROM history WHERE (h_w_id, rowid) = ANY ( ARRAY " + pkeyList + ")"); - int deleteCount = conn.createStatement().executeUpdate(deleteStatement); - System.out.println("Deleted " + deleteCount + " rows."); - // remove the deleted rows primary keys - pkeys.subList(0, lastIndex).clear(); - } - } while (cont); - } catch(Exception e) { - e.printStackTrace(); - return; - } -} - -// inner class to store the primary key data -public static class KeyFields { - public int hwid; - public UUID rowId; - - @Override - public String toString() { - return "( " + hwid + ", '" + rowId.toString() + "' )"; - } -} -~~~ - -The `KeyFields` class encapsulates the compound primary key for the `history` table, and is used in the typed collection of primary keys returned by the `SELECT` query. - -
                              - -
                              - -In C# the code would look similar to: - -{% include_cached copy-clipboard.html %} -~~~ csharp -public class KeyFields { - public Int32 hwid; - public Guid rowId; - - public override String ToString() - { - return "( " + hwid.ToString() + ", '" + rowId.ToString() + "' )"; - } - -} - -static void DeleteRows(string connString) -{ - using var dataSource = NpgsqlDataSource.Create(connString); - { - using var connection = dataSource.OpenConnection(); - bool cont = true; - do - { - using (var cmdSelect = new NpgsqlCommand("SELECT h_w_id, rowid FROM history WHERE h_date < current_date() - INTERVAL '1 MONTH' ORDER BY h_w_id, rowid LIMIT 20000", connection)) - { - List pkeys = new List(); - using (var reader = cmdSelect.ExecuteReader()) - { - if (reader.HasRows) - { - while (reader.Read()) - { - KeyFields kf = new KeyFields(); - kf.hwid = reader.GetInt32(0); - kf.rowId = reader.GetGuid(1); - pkeys.Add(kf); - } - } - else - { - // All the rows have been deleted, so break out of the loop - cont = false; - } - } - while (pkeys.Count > 0) { - // get the size of the list of primary keys - // if it is smaller than the batch size, set the last - // index of the batch size to the size of the list - Int32 size = pkeys.Count; - Int32 lastIndex; - if (size > 5000) - { - lastIndex = 5000; - } else - { - lastIndex = size; - } - List batch = pkeys.GetRange(0, lastIndex); - String pkeyList = String.Join(',', batch); - String deleteStatement = new String("DELETE FROM history WHERE (h_w_id, rowid) = ANY ( ARRAY [ " + pkeyList + "])"); - using (var cmdDelete = new NpgsqlCommand(deleteStatement, connection)) - { - Int32 deleteCount = cmdDelete.ExecuteNonQuery(); - Console.WriteLine("Deleted " + deleteCount + " rows."); - } - pkeys.RemoveRange(0, lastIndex); - } - } - } while (cont); - } -} -~~~ - -The `KeyFields` class encapsulates the compound primary key for the `history` table, and is used in the typed collection of primary keys returned by the `SELECT` query. - -
                              - -At each iteration, the selection query returns the primary key values of up to 20,000 rows of matching historical data from 5 seconds in the past, in a read-only transaction. Then, a nested loop iterates over the returned primary key values in smaller batches of 5,000 rows. At each iteration of the nested `DELETE` loop, a batch of rows is deleted. After the nested `DELETE` loop deletes all of the rows from the initial selection query, a time delay ensures that the next selection query reads historical data from the table after the last iteration's `DELETE` final delete. - -{{site.data.alerts.callout_info}} -CockroachDB records the timestamp of each row created in a table in the `crdb_internal_mvcc_timestamp` metadata column. In the absence of an explicit timestamp column in your table, you can use `crdb_internal_mvcc_timestamp` to filter expired data. - -`crdb_internal_mvcc_timestamp` cannot be indexed. If you plan to use `crdb_internal_mvcc_timestamp` as a filter for large deletes, you must follow the [non-indexed column pattern](#batch-delete-on-a-non-indexed-column). - -**Exercise caution when using `crdb_internal_mvcc_timestamp` in production, as the column is subject to change without prior notice in new releases of CockroachDB. Instead, we recommend creating a column with an [`ON UPDATE` expression](create-table.html#on-update-expressions) to avoid any conflicts due to internal changes to `crdb_internal_mvcc_timestamp`.** -{{site.data.alerts.end}} - -### Delete all of the rows in a table - -To delete all of the rows in a table, use a [`TRUNCATE` statement](truncate.html). - -For example, to delete all rows in the [`tpcc`](cockroach-workload.html#tpcc-workload) `new_order` table, execute the following SQL statement: - -{% include_cached copy-clipboard.html %} -~~~ sql -TRUNCATE new_order; -~~~ - -You can execute the statement from a compatible SQL client (e.g., the [CockroachDB SQL client](cockroach-sql.html)), or in a script or application. - -#### Examples - -Choose the language for the example code. - -
                              - - - -
                              - -
                              - -For example, in Python, using the `psycopg2` client driver: - -{% include_cached copy-clipboard.html %} -~~~ python -#!/usr/bin/env python3 - -import psycopg2 -import os - -conn = psycopg2.connect(os.environ.get('DB_URI')) - -with conn: - with conn.cursor() as cur: - cur.execute("TRUNCATE new_order") -~~~ - -
                              - -
                              - -In Java, the code would look similar to this: - -{% include_cached copy-clipboard.html %} -~~~ java -public static void truncateTable (Connection conn) { - try { - int results = conn.createStatement().executeUpdate("TRUNCATE new_order"); - System.out.println("Truncated table new_order. Result: " + results); - } catch (Exception e) { - e.printStackTrace(); - return; - } -} -~~~ - -
                              - -
                              - -In C# the code would look similar to this: - -{% include_cached copy-clipboard.html %} -~~~ csharp -static void TruncateTable(string connString) -{ - using var dataSource = NpgsqlDataSource.Create(connString); - using var connection = dataSource.OpenConnection(); - using (var cmdTruncate = new NpgsqlCommand("TRUNCATE new_order", connection)) - { - Int32 results = cmdTruncate.ExecuteNonQuery(); - Console.WriteLine("Dropped " + results + " rows in new_order"); - } -} -~~~ - -
                              - -{{site.data.alerts.callout_success}} -For detailed reference documentation on the `TRUNCATE` statement, including additional examples, see the [`TRUNCATE` syntax page](truncate.html). -{{site.data.alerts.end}} - -## See also - -- [Delete data](delete-data.html) -- [Batch Delete Expired Data with Row-Level TTL](row-level-ttl.html) -- [`DELETE`](delete.html) -- [`TRUNCATE`](truncate.html) diff --git a/src/current/v22.1/bulk-update-data.md b/src/current/v22.1/bulk-update-data.md deleted file mode 100644 index be5016841ac..00000000000 --- a/src/current/v22.1/bulk-update-data.md +++ /dev/null @@ -1,114 +0,0 @@ ---- -title: Bulk-update Data -summary: How to to update a large amount of data using batch-update loops. -toc: true -docs_area: develop ---- - -To update multiple rows in a table, you can use a single [`UPDATE` statement](update.html), with a `WHERE` clause that filters the rows you want to update. - -To update a large number of rows (i.e., tens of thousands of rows or more), we recommend iteratively updating subsets of the rows that you want to update, until all of the rows have been updated. You can write a script to do this, or you can write a loop into your application. - -This page provides guidance on writing batch-update loops with a pattern that executes `SELECT` and `UPDATE` statements at different levels of a nested loop. - -{{site.data.alerts.callout_danger}} -Exercise caution when batch-updating rows from tables with foreign key constraints and explicit [`ON UPDATE` foreign key actions](foreign-key.html#foreign-key-actions). To preserve `UPDATE` performance on tables with foreign key actions, we recommend using smaller batch sizes, as additional rows updated due to `ON UPDATE` actions can make batch loops significantly slower. -{{site.data.alerts.end}} - -## Before you begin - -Before reading this page, do the following: - -- [Create a CockroachDB {{ site.data.products.serverless }} cluster](../cockroachcloud/quickstart.html) or [start a local cluster](../cockroachcloud/quickstart.html?filters=local). -- [Install a Driver or ORM Framework](install-client-drivers.html). - - For the example on this page, we use the `psycopg2` Python driver. -- [Connect to the database](connect-to-the-database.html). -- [Insert data](insert-data.html) that you now want to update. - - For the example on this page, we load a cluster with the `movr` database and data from [`cockroach workload`](cockroach-workload.html). - -## Write a batch-update loop - -1. At the top level of a loop in your application, or in a script, execute a [`SELECT`](selection-queries.html) query that returns a large batch of primary key values for the rows that you want to update. When defining the `SELECT` query: - - Use a `WHERE` clause to filter on columns that identify the rows that you want to update. This clause should also filter out the rows that have been updated by previous iterations of the nested `UPDATE` loop: - - For optimal performance, the first condition of the filter should evaluate the last primary key value returned by the last `UPDATE` query that was executed. This narrows each `SELECT` query's scan to the fewest rows possible, and preserves the performance of the row updates over time. - - Another condition of the filter should evaluate column values persisted to the database that signal whether or not a row has been updated. This prevents rows from being updated more than once, in the event that the application or script crashes and needs to be restarted. 
If there is no way to distinguish between an updated row and a row that has not yet been updated, you might need to [add a new column to the table](add-column.html) (e.g., `ALTER TABLE ... ADD COLUMN updated BOOL;`). - - Add an [`AS OF SYSTEM TIME` clause](as-of-system-time.html) to the end of the selection subquery, or run the selection query in a separate, read-only transaction with [`SET TRANSACTION AS OF SYSTEM TIME`](as-of-system-time.html#use-as-of-system-time-in-transactions). This helps to reduce [transaction contention](transactions.html#transaction-contention). - - Use a [`LIMIT`](limit-offset.html) clause to limit the number of rows queried to a subset of the rows that you want to update. To determine the optimal `SELECT` batch size, try out different sizes (10,000 rows, 20,000 rows, etc.), and monitor the change in performance. Note that this `SELECT` batch size can be much larger than the batch size of rows that are updated in the subsequent `UPDATE` query. - - To ensure that rows are efficiently scanned in the subsequent `UPDATE` query, include an [`ORDER BY`](order-by.html) clause on the primary key. - -1. Under the `SELECT` query, write a nested loop that executes `UPDATE` queries over the primary key values returned by the `SELECT` query, in batches smaller than the initial `SELECT` batch size. When defining the `UPDATE` query: - - Use a `WHERE` clause that filters on a subset of the primary key values returned by the top-level `SELECT` query. To determine the optimal `UPDATE` batch size, try out different sizes (1,000 rows, 2,000 rows, etc.), and monitor the change in performance. - - Make sure that the `UPDATE` query updates a column that signals whether or not the row has been updated. This column might be different from the column whose values you want to update. - - Add a `RETURNING` clause to the end of the query that returns the primary key values of the rows being updated. The `WHERE` clause in the top-level `SELECT` query should filter out the primary key value of the last row that was updated, using the values returned by the last `UPDATE` query executed. - - Where possible, we recommend executing each `UPDATE` in a separate transaction. - -## Example - -Suppose that over the past year, you've recorded hundreds of thousands of [MovR](movr.html) rides in a cluster loaded with the [`movr`](cockroach-workload.html) database. And suppose that, for the last week of December, you applied a 10% discount to all ride charges billed to users, but you didn't update the `rides` table to reflect the discounts. - -To get the `rides` table up-to-date, you can create a loop that updates the relevant rows of the `rides` table in batches, following the query guidance provided [above](#write-a-batch-update-loop). - -In this case, you will also need to add a new column to the `rides` table that signals whether or not a row has been updated. Using this column, the top-level `SELECT` query can filter out rows that have already been updated, which will prevent rows from being updated more than once if the script crashes. 
- -For example, you could create a column named `discounted`, of data type [`BOOL`](bool.html): - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER TABLE rides ADD COLUMN discounted BOOL DEFAULT false; -~~~ - -{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %} - -In Python, a batch-update script might look similar to the following: - -{% include_cached copy-clipboard.html %} -~~~ python -#!/usr/bin/env python3 - -import psycopg2 -import os -import time - -def main(): - conn = psycopg2.connect(os.environ.get('DB_URI')) - lastid = None - - while True: - with conn: - with conn.cursor() as cur: - cur.execute("SET TRANSACTION AS OF SYSTEM TIME '-5s'") - if lastid: - cur.execute("SELECT id FROM rides WHERE id > %s AND discounted != true AND extract('month', start_time) = 12 AND extract('day', start_time) > 23 ORDER BY id LIMIT 10000", (lastid,)) - else: - cur.execute("SELECT id FROM rides WHERE discounted != true AND extract('month', start_time) = 12 AND extract('day', start_time) > 23 ORDER BY id LIMIT 10000") - pkvals = list(cur) - if not pkvals: - return - while pkvals: - batch = pkvals[:2000] - pkvals = pkvals[2000:] - with conn: - with conn.cursor() as cur: - cur.execute("UPDATE rides SET discounted = true, revenue = revenue*.9 WHERE id = ANY %s RETURNING id", (batch,)) - print(cur.statusmessage) - if not pkvals: - lastid = cur.fetchone()[0] - del batch - del pkvals - time.sleep(5) - - conn.close() -if __name__ == '__main__': - main() -~~~ - -At each iteration, the `SELECT` query returns the primary key values of up to 10,000 rows of matching historical data from 5 seconds in the past, in a read-only transaction. Then, a nested loop iterates over the returned primary key values in smaller batches of 2,000 rows. At each iteration of the nested `UPDATE` loop, a batch of rows is updated. After the nested `UPDATE` loop updates all of the rows from the initial selection query, a time delay ensures that the next selection query reads historical data from the table after the last iteration's `UPDATE` final update. - -Note that the last iteration of the nested loop assigns the primary key value of the last row updated to the `lastid` variable. The next `SELECT` query uses this variable to decrease the number of rows scanned by the number of rows updated in the last iteration of the loop. - -## See also - -- [Update data](update-data.html) -- [`UPDATE`](update.html) diff --git a/src/current/v22.1/bytes.md b/src/current/v22.1/bytes.md deleted file mode 100644 index 1522257bf57..00000000000 --- a/src/current/v22.1/bytes.md +++ /dev/null @@ -1,135 +0,0 @@ ---- -title: BYTES -summary: The BYTES data type stores binary strings of variable length. -toc: true -docs_area: reference.sql ---- - -The `BYTES` [data type](data-types.html) stores binary strings of variable length. - - -## Aliases - -In CockroachDB, the following are aliases for `BYTES`: - -- `BYTEA` -- `BLOB` - -## Syntax - -To express a byte array constant, see the section on -[byte array literals](sql-constants.html#byte-array-literals) for more -details. For example, the following three are equivalent literals for the same -byte array: `b'abc'`, `b'\141\142\143'`, `b'\x61\x62\x63'`. - -In addition to this syntax, CockroachDB also supports using -[string literals](sql-constants.html#string-literals), including the -syntax `'...'`, `e'...'` and `x'....'` in contexts where a byte array -is otherwise expected. 
- -## Size - -The size of a `BYTES` value is variable, but it's recommended to keep values under 1 MB to ensure adequate performance. Above that threshold, [write amplification](architecture/storage-layer.html#write-amplification) and other considerations may cause significant performance degradation. - -{{site.data.alerts.callout_danger}} -{% include {{page.version.version}}/sql/add-size-limits-to-indexed-columns.md %} -{{site.data.alerts.end}} - -{{site.data.alerts.callout_success}} -If your application requires large binary input in single queries, you can store the blobs somewhere your client can access them (using a cloud storage service, for example), and then reference their addresses from a statement. -{{site.data.alerts.end}} - -## Example - -~~~ sql -> CREATE TABLE bytes (a INT PRIMARY KEY, b BYTES); - -> -- explicitly typed BYTES literals -> INSERT INTO bytes VALUES (1, b'\141\142\143'), (2, b'\x61\x62\x63'), (3, b'\141\x62\c'); - -> -- string literal implicitly typed as BYTES -> INSERT INTO bytes VALUES (4, 'abc'); - - -> SELECT * FROM bytes; -~~~ -~~~ -+---+-----+ -| a | b | -+---+-----+ -| 1 | abc | -| 2 | abc | -| 3 | abc | -| 4 | abc | -+---+-----+ -(4 rows) -~~~ - -## Supported conversions - -`BYTES` values can be -[cast](data-types.html#data-type-conversions-and-casts) explicitly to -[`STRING`](string.html). This conversion always succeeds. Two -conversion modes are supported, controlled by the -[session variable](set-vars.html#supported-variables) `bytea_output`: - -- `hex` (default): The output of the conversion starts with the two - characters `\`, `x` and the rest of the string is composed by the - hexadecimal encoding of each byte in the input. For example, - `x'48AA'::STRING` produces `'\x48AA'`. - -- `escape`: The output of the conversion contains each byte in the - input, as-is if it is an ASCII character, or encoded using the octal - escape format `\NNN` otherwise. For example, `x'48AA'::STRING` - produces `'0\252'`. - -`STRING` values can be cast explicitly to `BYTES`. This conversion -will fail if the hexadecimal digits are not valid, or if there is an -odd number of them. Two conversion modes are supported: - -- If the string starts with the two special characters `\` and `x` - (e.g., `\xAABB`), the rest of the string is interpreted as a sequence - of hexadecimal digits. The string is then converted to a byte array - where each pair of hexadecimal digits is converted to one byte. - -- Otherwise, the string is converted to a byte array that contains its - UTF-8 encoding. - -### `STRING` vs. `BYTES` - -While both `STRING` and `BYTES` can appear to have similar behavior in many situations, one should understand their nuance before casting one into the other. - -`STRING` treats all of its data as characters, or more specifically, Unicode code points. `BYTES` treats all of its data as a byte string. This difference in implementation can lead to dramatically different behavior. 
For example, let's take a complex Unicode character such as ☃ ([the snowman emoji](https://emojipedia.org/snowman/)):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> SELECT length('☃'::string);
-~~~
-
-~~~
-  length
-+--------+
-       1
-~~~
-
-~~~ sql
-> SELECT length('☃'::bytes);
-~~~
-~~~
-  length
-+--------+
-       3
-~~~
-
-In this case, [`LENGTH(string)`](functions-and-operators.html#string-and-byte-functions) measures the number of Unicode code points present in the string, whereas [`LENGTH(bytes)`](functions-and-operators.html#string-and-byte-functions) measures the number of bytes required to store that value. Each character (or Unicode code point) can be encoded using multiple bytes, hence the difference in output between the two.
-
-#### Translating literals to `STRING` vs. `BYTES`
-
-A literal entered through a SQL client will be translated into a different value based on the type:
-
-+ `BYTES` gives a special meaning to the pair `\x` at the beginning, and translates the rest by substituting each pair of hexadecimal digits with a single byte. For example, `\xff` is equivalent to a single byte with the value of 255. For more information, see [SQL Constants: String literals with character escapes](sql-constants.html#string-literals-with-character-escapes).
-+ `STRING` does not give a special meaning to `\x`, so all characters are treated as distinct Unicode code points. For example, `\xff` is treated as a `STRING` with length 4 (`\`, `x`, `f`, and `f`).
-
-## See also
-
-[Data Types](data-types.html)
diff --git a/src/current/v22.1/cancel-job.md b/src/current/v22.1/cancel-job.md
deleted file mode 100644
index 84e630daec1..00000000000
--- a/src/current/v22.1/cancel-job.md
+++ /dev/null
@@ -1,109 +0,0 @@
----
-title: CANCEL JOB
-summary: The CANCEL JOB statement stops long-running jobs such as imports, backups, and schema changes.
-toc: true
-docs_area: reference.sql
----
-
-The `CANCEL JOB` [statement](sql-statements.html) lets you stop long-running jobs, which include:
-
-- [`IMPORT`](import.html) jobs
-- [`BACKUP`](backup.html) and [`RESTORE`](restore.html) jobs
-- [User-created table statistics](create-statistics.html) jobs
-- [Automatic table statistics](cost-based-optimizer.html#table-statistics) jobs
-- [Changefeeds](create-changefeed.html)
-- [Scheduled backup](manage-a-backup-schedule.html) jobs
-- [Schema change](online-schema-changes.html) jobs (see [Limitations](#limitations) for exceptions)
-
-## Limitations
-
-- When an Enterprise [`RESTORE`](restore.html) is canceled, partially restored data is properly cleaned up. This can have a minor, temporary impact on cluster performance.
-- To avoid transaction states that cannot properly [roll back](rollback-transaction.html), `DROP` statements (e.g., [`DROP TABLE`](drop-table.html)), `ALTER ... RENAME` statements (e.g., [`ALTER TABLE ... RENAME TO`](rename-table.html)), and [`CREATE TABLE ... AS`](create-table-as.html) statements are no longer cancellable.
-
-## Required privileges
-
-To cancel a job, the user must be a member of the `admin` role or must have the [`CONTROLJOB`](create-user.html#create-a-user-that-can-pause-resume-and-cancel-non-admin-jobs) parameter set. Non-admin users cannot cancel admin users' jobs.
-
-## Synopsis
-
-<div>
                              -{% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/cancel_job.html %} -
                              - -## Parameters - -Parameter | Description -----------|------------ -`job_id` | The ID of the job you want to cancel, which can be found with [`SHOW JOBS`](show-jobs.html). -`select_stmt` | A [selection query](selection-queries.html) that returns `job_id`(s) to cancel. -`for_schedules_clause` | The schedule you want to cancel jobs for. You can cancel jobs for a specific schedule (`FOR SCHEDULE id`) or cancel jobs for multiple schedules by nesting a [`SELECT` clause](select-clause.html) in the statement (`FOR SCHEDULES `). See the [examples](#cancel-jobs-for-a-schedule) below. - -## Examples - -### Cancel a single job - -~~~ sql -> SHOW JOBS; -~~~ -~~~ -+----------------+---------+-------------------------------------------+... -| id | type | description |... -+----------------+---------+-------------------------------------------+... -| 27536791415282 | RESTORE | RESTORE db.* FROM 'azure://backup/db/tbl' |... -+----------------+---------+-------------------------------------------+... -~~~ -~~~ sql -> CANCEL JOB 27536791415282; -~~~ - -### Cancel multiple jobs - -To cancel multiple jobs, nest a [`SELECT` clause](select-clause.html) that retrieves `job_id`(s) inside the `CANCEL JOBS` statement: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CANCEL JOBS (WITH x AS (SHOW JOBS) SELECT job_id FROM x - WHERE user_name = 'maxroach'); -~~~ - -All jobs created by `maxroach` will be cancelled. - -### Cancel automatic table statistics jobs - -Canceling an automatic table statistics job is not useful since the system will automatically restart the job immediately. To permanently disable automatic table statistics jobs, disable the `sql.stats.automatic_collection.enabled` [cluster setting](cluster-settings.html): - -{% include_cached copy-clipboard.html %} -~~~ sql -> SET CLUSTER SETTING sql.stats.automatic_collection.enabled = false; -~~~ - -### Cancel jobs for a schedule - - To cancel jobs for a specific [backup schedule](create-schedule-for-backup.html), use the schedule's `id`: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CANCEL JOBS FOR SCHEDULE 590204387299262465; -~~~ -~~~ -CANCEL JOBS FOR SCHEDULES 1 -~~~ - -You can also CANCEL multiple schedules by nesting a [`SELECT` clause](select-clause.html) that retrieves `id`(s) inside the `CANCEL JOBS` statement: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CANCEL JOBS FOR SCHEDULES WITH x AS (SHOW SCHEDULES) SELECT id FROM x WHERE label = 'test_schedule'; -~~~ - -~~~ -CANCEL JOBS FOR SCHEDULES 2 -~~~ - -## See also - -- [`SHOW JOBS`](show-jobs.html) -- [`BACKUP`](backup.html) -- [`RESTORE`](restore.html) -- [`IMPORT`](import.html) -- [`CREATE CHANGEFEED`](create-changefeed.html) diff --git a/src/current/v22.1/cancel-query.md b/src/current/v22.1/cancel-query.md deleted file mode 100644 index 2164630e764..00000000000 --- a/src/current/v22.1/cancel-query.md +++ /dev/null @@ -1,88 +0,0 @@ ---- -title: CANCEL QUERY -summary: The CANCEL QUERY statement cancels a running SQL query. -toc: true -docs_area: reference.sql ---- - -The `CANCEL QUERY` [statement](sql-statements.html) cancels a running SQL query. - - -## Considerations - -- Schema changes are treated differently than other SQL queries. You can use SHOW JOBS to monitor the progress of schema changes and CANCEL JOB to cancel schema changes that are taking longer than expected. -- In rare cases where a query is close to completion when a cancellation request is issued, the query may run to completion. 
-- In addition to the `CANCEL QUERY` statement, CockroachDB also supports query cancellation by [client drivers and ORMs](install-client-drivers.html) using the PostgreSQL wire protocol (pgwire). This allows CockroachDB to stop executing queries that your application is no longer waiting for, thereby reducing load on the cluster. pgwire query cancellation differs from the `CANCEL QUERY` statement in the following ways: -    - It is how most client drivers and ORMs implement query cancellation. For example, it is [used by PGJDBC](https://github.com/pgjdbc/pgjdbc/blob/3a54d28e0b416a84353d85e73a23180a6719435e/pgjdbc/src/main/java/org/postgresql/core/QueryExecutorBase.java#L171) to implement the [`setQueryTimeout` method](https://jdbc.postgresql.org/documentation/publicapi/org/postgresql/jdbc/PgStatement.html#setQueryTimeout-int-). -    - The cancellation request is sent over a different network connection than is used by SQL connections. -    - If there are too many unsuccessful cancellation attempts, CockroachDB will start rejecting pgwire cancellations. - -## Required privileges - -Members of the `admin` role (including `root`, which belongs to `admin` by default) can cancel any currently active queries. Users who are not members of the `admin` role can cancel only their own currently active queries. To view and cancel another non-admin user's query, the user must be a member of the `admin` role or must have the [`VIEWACTIVITY`](create-user.html#create-a-user-that-can-see-and-cancel-non-admin-queries-and-sessions) and [`CANCELQUERY`](create-user.html#create-a-user-that-can-see-and-cancel-non-admin-queries-and-sessions) parameters set. - -## Synopsis - -
                              -{% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/cancel_query.html %} -
                              - -## Parameters - -Parameter | Description -----------|------------ -`query_id` | A [scalar expression](scalar-expressions.html) that produces the ID of the query to cancel.

                              `CANCEL QUERY` accepts a single query ID. If a subquery is used and returns multiple IDs, the `CANCEL QUERY` statement will fail. To cancel multiple queries, use `CANCEL QUERIES`. -`select_stmt` | A [selection query](selection-queries.html) whose result you want to cancel. - -## Response - -When a query is successfully cancelled, CockroachDB sends a `query execution canceled` error to the client that issued the query. - -- If the canceled query was a single, stand-alone statement, no further action is required by the client. -- If the canceled query was part of a larger, multi-statement [transaction](transactions.html), the client should then issue a [`ROLLBACK`](rollback-transaction.html) statement. - -## Examples - -### Cancel a query via the query ID - -In this example, we use the [`SHOW STATEMENTS`](show-statements.html) statement to get the ID of a query and then pass the ID into the `CANCEL QUERY` statement: - -~~~ sql -> SHOW STATEMENTS; -~~~ - -~~~ - query_id | node_id | session_id | user_name | start | query | client_address | application_name | distributed | phase ------------------------------------+---------+----------------------------------+-----------+-------------------------------------+--------------------------------------+-----------------+------------------+-------------+------------ - 1673f58fca5301900000000000000001 | 1 | 1673f583067d51280000000000000001 | demo | 2021-04-08 18:31:29.079614+00:00:00 | SELECT * FROM rides ORDER BY revenue | 127.0.0.1:55212 | $ cockroach demo | true | executing - 1673f590433eaa000000000000000001 | 1 | 1673f58a4ba3c8e80000000000000001 | demo | 2021-04-08 18:31:31.108372+00:00:00 | SHOW CLUSTER STATEMENTS | 127.0.0.1:55215 | $ cockroach sql | false | executing -(2 rows) -~~~ - -~~~ sql -> CANCEL QUERY '1673f590433eaa000000000000000001'; -~~~ - -### Cancel a query via a subquery - -In this example, we nest a [`SELECT` clause](select-clause.html) that retrieves the ID of a query inside the `CANCEL QUERY` statement: - -~~~ sql -> CANCEL QUERY (WITH x AS (SHOW CLUSTER STATEMENTS) SELECT query_id FROM x - WHERE client_address = '127.0.0.1:55212' - AND user_name = 'demo' - AND query = 'SELECT * FROM rides ORDER BY revenue'); -~~~ - -~~~ -CANCEL QUERIES 1 -~~~ - -{{site.data.alerts.callout_info}}CANCEL QUERY accepts a single query ID. If a subquery is used and returns multiple IDs, the CANCEL QUERY statement will fail. To cancel multiple queries, use CANCEL QUERIES.{{site.data.alerts.end}} - -## See also - -- [Manage Long-Running Queries](manage-long-running-queries.html) -- [`SHOW STATEMENTS`](show-statements.html) -- [`CANCEL SESSION`](cancel-session.html) -- [SQL Statements](sql-statements.html) diff --git a/src/current/v22.1/cancel-session.md b/src/current/v22.1/cancel-session.md deleted file mode 100644 index 3feefac73eb..00000000000 --- a/src/current/v22.1/cancel-session.md +++ /dev/null @@ -1,96 +0,0 @@ ---- -title: CANCEL SESSION -summary: The CANCEL SESSION statement stops long-running sessions. -toc: true -docs_area: reference.sql ---- - -The `CANCEL SESSION` [statement](sql-statements.html) lets you stop long-running sessions. `CANCEL SESSION` will attempt to cancel the currently active query and end the session. 
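-For a quick illustration, a session is canceled by passing its ID to the statement. The session ID below is a placeholder taken from the example output later on this page:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> CANCEL SESSION '1530fe0e46d2692e0000000000000001';
-~~~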
- - -## Required privileges - -To view and cancel a session, the user must be a member of the `admin` role or must have the [`VIEWACTIVITY`](create-user.html#create-a-user-that-can-see-and-cancel-non-admin-queries-and-sessions) and [`CANCELQUERY`](create-user.html#create-a-user-that-can-see-and-cancel-non-admin-queries-and-sessions) parameters set. Non-admin users cannot cancel admin users' sessions. - -## Synopsis - -
                              -{% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/cancel_session.html %} -
                              - -## Parameters - -Parameter | Description -----------|------------ -`session_id` | The ID of the session you want to cancel, which can be found with [`SHOW SESSIONS`](show-sessions.html).

                              `CANCEL SESSION` accepts a single session ID. If a subquery is used and returns multiple IDs, the `CANCEL SESSION` statement will fail. To cancel multiple sessions, use `CANCEL SESSIONS`. -`select_stmt` | A [selection query](selection-queries.html) that returns `session_id`(s) to cancel. - -## Example - -### Cancel a single session - -In this example, we use the [`SHOW SESSIONS`](show-sessions.html) statement to get the ID of a session and then pass the ID into the `CANCEL SESSION` statement: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW SESSIONS; -~~~ -~~~ -+---------+----------------------------------+-----------+... -| node_id | session_id | user_name |... -+---------+----------------------------------+-----------+... -| 1 | 1530c309b1d8d5f00000000000000001 | root |... -+---------+----------------------------------+-----------+... -| 1 | 1530fe0e46d2692e0000000000000001 | maxroach |... -+---------+----------------------------------+-----------+... -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> CANCEL SESSION '1530fe0e46d2692e0000000000000001'; -~~~ - -You can also cancel a session using a subquery that returns a single session ID: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CANCEL SESSIONS (WITH x AS (SHOW SESSIONS) SELECT session_id FROM x - WHERE user_name = 'root'); -~~~ - -### Cancel multiple sessions - -Use the [`SHOW SESSIONS`](show-sessions.html) statement to view all active sessions: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW SESSIONS; -~~~ -~~~ -+---------+----------------------------------+-----------+... -| node_id | session_id | user_name |... -+---------+----------------------------------+-----------+... -| 1 | 1530c309b1d8d5f00000000000000001 | root |... -+---------+----------------------------------+-----------+... -| 1 | 1530fe0e46d2692e0000000000000001 | maxroach |... -+---------+----------------------------------+-----------+... -| 1 | 15310cc79671fc6a0000000000000001 | maxroach |... -+---------+----------------------------------+-----------+... -~~~ - -To cancel multiple sessions, nest a [`SELECT` clause](select-clause.html) that retrieves `session_id`(s) inside the `CANCEL SESSIONS` statement: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CANCEL SESSIONS (WITH x AS (SHOW SESSIONS) SELECT session_id FROM x - WHERE user_name = 'maxroach'); -~~~ - -All sessions created by `maxroach` will be cancelled. - -## See also - -- [`SHOW SESSIONS`](show-sessions.html) -- [`SET {session variable}`](set-vars.html) -- [`SHOW {session variable}`](show-vars.html) -- [SQL Statements](sql-statements.html) diff --git a/src/current/v22.1/change-data-capture-overview.md b/src/current/v22.1/change-data-capture-overview.md deleted file mode 100644 index b9c2233b97a..00000000000 --- a/src/current/v22.1/change-data-capture-overview.md +++ /dev/null @@ -1,48 +0,0 @@ ---- -title: Change Data Capture Overview -summary: Stream data out of CockroachDB with efficient, distributed, row-level change subscriptions (changefeeds). -toc: true -docs_area: stream_data -key: stream-data-out-of-cockroachdb-using-changefeeds.html ---- - -Change data capture (CDC) provides efficient, distributed, row-level changefeeds into a configurable sink for downstream processing such as reporting, caching, or full-text indexing. - -## What is change data capture? - -While CockroachDB is an excellent system of record, it also needs to coexist with other systems. 
For example, you might want to keep your data mirrored in full-text indexes, analytics engines, or big data pipelines. - -The main feature of CDC is the changefeed, which targets an allowlist of tables, called the "watched rows". There are two implementations of changefeeds: - -| [Core changefeeds](create-and-configure-changefeeds.html?filters=core) | [{{ site.data.products.enterprise }} changefeeds](create-and-configure-changefeeds.html) | ---------------------------------------------------|-----------------------------------------------------------------| -| Useful for prototyping or quick testing. | Recommended for production use. | -| Available in all products. | Available in CockroachDB {{ site.data.products.dedicated }} or with an [{{ site.data.products.enterprise }} license](enterprise-licensing.html) in CockroachDB {{ site.data.products.core }} or CockroachDB {{ site.data.products.serverless }}. | -| Streams indefinitely until underlying SQL connection is closed. | Maintains connection to configured sink. | -| Create with [`EXPERIMENTAL CHANGEFEED FOR`](changefeed-for.html). | Create with [`CREATE CHANGEFEED`](create-changefeed.html). | -| Watches one or multiple tables in a comma-separated list. Emits every change to a "watched" row as a record. | Watches one or multiple tables in a comma-separated list. Emits every change to a "watched" row as a record in a
                              configurable format (`JSON` or Avro) to a [configurable sink](changefeed-sinks.html) (e.g., [Kafka](https://kafka.apache.org/)). | -| [`CREATE`](create-and-configure-changefeeds.html?filters=core) changefeed and cancel by closing the connection. | Manage changefeed with [`CREATE`](create-and-configure-changefeeds.html#create), [`PAUSE`](create-and-configure-changefeeds.html#pause), [`RESUME`](create-and-configure-changefeeds.html#resume), [`ALTER`](alter-changefeed.html), and [`CANCEL`](create-and-configure-changefeeds.html#cancel), as well as [monitor](monitor-and-debug-changefeeds.html#monitor-a-changefeed) and [debug](monitor-and-debug-changefeeds.html#debug-a-changefeed). | - -See [Ordering Guarantees](changefeed-messages.html#ordering-guarantees) for detail on CockroachDB's at-least-once-delivery-guarantee as well as explanation on how rows are emitted. - -## How does an Enterprise changefeed work? - -When an {{ site.data.products.enterprise }} changefeed is started on a node, that node becomes the _coordinator_ for the changefeed job (**Node 2** in the diagram). The coordinator node acts as an administrator: keeping track of all other nodes during job execution and the changefeed work as it completes. The changefeed job will run across all nodes in the cluster to access changed data in the watched table. Typically, the [leaseholder](architecture/replication-layer.html#leases) for a particular range (or the range’s replica) determines which node emits the changefeed data. - -Each node uses its aggregator processors to send back checkpoint progress to the coordinator, which gathers this information to update the high-water mark timestamp. The high-water mark acts as a checkpoint for the changefeed’s job progress, and guarantees that all changes before (or at) the timestamp have been emitted. In the unlikely event that the changefeed’s coordinating node were to fail during the job, that role will move to a different node and the changefeed will restart from the last checkpoint. If restarted, the changefeed will send duplicate messages starting at the high-water mark time to the current time. See [Ordering Guarantees](changefeed-messages.html#ordering-guarantees) for detail on CockroachDB's at-least-once-delivery-guarantee as well as an explanation on how rows are emitted. - -Changefeed process in a 3-node cluster - -With [`resolved`](create-changefeed.html#resolved-option) specified when a changefeed is started, the coordinator will send the resolved timestamp (i.e., the high-water mark) to each endpoint in the sink. For example, when using [Kafka](changefeed-sinks.html#kafka) this will be sent as a message to each partition; for [cloud storage](changefeed-sinks.html#cloud-storage-sink), this will be emitted as a resolved timestamp file. - -As rows are updated, added, and deleted in the targeted table(s), the node sends the row changes through the [rangefeed mechanism](create-and-configure-changefeeds.html#enable-rangefeeds) to the changefeed encoder, which encodes these changes into the [final message format](changefeed-messages.html#responses). The message is emitted from the encoder to the sink—it can emit to any endpoint in the sink. In the diagram example, this means that the messages can emit to any Kafka Broker. 
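-As a minimal sketch of the flow described above (the Kafka address and table name are illustrative placeholders), the following statement starts an {{ site.data.products.enterprise }} changefeed that emits a resolved timestamp at most every 10 seconds:
-
-~~~ sql
-CREATE CHANGEFEED FOR TABLE office_dogs INTO 'kafka://localhost:9092' WITH resolved = '10s';
-~~~
-
-Each endpoint in the sink then periodically receives a resolved timestamp message guaranteeing that no rows with an earlier update timestamp remain to be emitted.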
- -See the following for more detail on changefeed setup and use: - -- [Enable rangefeeds](create-and-configure-changefeeds.html#enable-rangefeeds) -- [Changefeed Sinks](changefeed-sinks.html) -- [Changefeed Examples](changefeed-examples.html) - -## Known limitations - -{% include {{ page.version.version }}/known-limitations/cdc.md %} diff --git a/src/current/v22.1/changefeed-examples.md b/src/current/v22.1/changefeed-examples.md deleted file mode 100644 index 6f5d0011dd5..00000000000 --- a/src/current/v22.1/changefeed-examples.md +++ /dev/null @@ -1,642 +0,0 @@ ---- -title: Changefeed Examples -summary: Examples for starting and using changefeeds with different aims. -toc: true -docs_area: stream_data ---- - -This page provides step-by-step examples for using Core and {{ site.data.products.enterprise }} changefeeds. Creating {{ site.data.products.enterprise }} changefeeds is available on CockroachDB {{ site.data.products.dedicated }}, on CockroachDB {{ site.data.products.serverless }} clusters with an [{{ site.data.products.enterprise }} license](enterprise-licensing.html), and on CockroachDB {{ site.data.products.core }} clusters with an [{{ site.data.products.enterprise }} license](enterprise-licensing.html). Core changefeeds are available in all products. - -For a summary of Core and {{ site.data.products.enterprise }} changefeed features, see [What is Change Data Capture?](change-data-capture-overview.html#what-is-change-data-capture) - -{{ site.data.products.enterprise }} changefeeds can connect to the following sinks: - -- [Kafka](#create-a-changefeed-connected-to-kafka) -- [Google Cloud Pub/Sub](#create-a-changefeed-connected-to-a-google-cloud-pub-sub-sink) -- [Cloud Storage](#create-a-changefeed-connected-to-a-cloud-storage-sink) (Amazon S3, Google Cloud Storage, Azure Storage) -- [Webhook](#create-a-changefeed-connected-to-a-webhook-sink) - -See the [Changefeed Sinks](changefeed-sinks.html) page for more detail on forming sink URIs, available sink query parameters, and specifics on configuration. - -Use the following filters to show usage examples for either **Enterprise** or **Core** changefeeds: - -
                              - - -
                              - -
                              - -## Create a changefeed connected to Kafka - -{{site.data.alerts.callout_info}} -[`CREATE CHANGEFEED`](create-changefeed.html) is an [{{ site.data.products.enterprise }}-only](enterprise-licensing.html) feature. For the Core version, see [the `CHANGEFEED FOR` example](#create-a-core-changefeed). -{{site.data.alerts.end}} - -In this example, you'll set up a changefeed for a single-node cluster that is connected to a Kafka sink. The changefeed will watch two tables. - -1. If you do not already have one, [request a trial {{ site.data.products.enterprise }} license](enterprise-licensing.html). - -2. Use the [`cockroach start-single-node`](cockroach-start-single-node.html) command to start a single-node cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - cockroach start-single-node --insecure --listen-addr=localhost --background - ~~~ - -1. Download and extract the [Confluent Open Source platform](https://www.confluent.io/download/) (which includes Kafka). - -1. Move into the extracted `confluent-` directory and start Confluent: - - {% include_cached copy-clipboard.html %} - ~~~ shell - ./bin/confluent local services start - ~~~ - - Only `zookeeper` and `kafka` are needed. To troubleshoot Confluent, see [their docs](https://docs.confluent.io/current/installation/installing_cp.html#zip-and-tar-archives) and the [Quick Start Guide](https://docs.confluent.io/platform/current/quickstart/ce-quickstart.html#ce-quickstart). - -1. Create two Kafka topics: - - {% include_cached copy-clipboard.html %} - ~~~ shell - ./bin/kafka-topics \ - --create \ - --zookeeper localhost:2181 \ - --replication-factor 1 \ - --partitions 1 \ - --topic office_dogs - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - ./bin/kafka-topics \ - --create \ - --zookeeper localhost:2181 \ - --replication-factor 1 \ - --partitions 1 \ - --topic employees - ~~~ - - {{site.data.alerts.callout_info}} - You are expected to create any Kafka topics with the necessary number of replications and partitions. [Topics can be created manually](https://kafka.apache.org/documentation/#basic_ops_add_topic) or [Kafka brokers can be configured to automatically create topics](https://kafka.apache.org/documentation/#topicconfigs) with a default partition count and replication factor. - {{site.data.alerts.end}} - -{% include {{ page.version.version }}/cdc/sql-cluster-settings-example.md %} - -{% include {{ page.version.version }}/cdc/create-example-db-cdc.md %} - -1. Start the changefeed: - - {% include_cached copy-clipboard.html %} - ~~~ sql - CREATE CHANGEFEED FOR TABLE office_dogs, employees INTO 'kafka://localhost:9092'; - ~~~ - ~~~ - job_id - +--------------------+ - 360645287206223873 - (1 row) - - NOTICE: changefeed will emit to topic office_dogs - NOTICE: changefeed will emit to topic employees - ~~~ - - This will start up the changefeed in the background and return the `job_id`. The changefeed writes to Kafka. - -1. 
In a new terminal, move into the extracted `confluent-` directory and start watching the Kafka topics: - - {% include_cached copy-clipboard.html %} - ~~~ shell - ./bin/kafka-console-consumer \ - --bootstrap-server=localhost:9092 \ - --from-beginning \ - --include 'office_dogs|employees' - ~~~ - - ~~~ - {"after": {"id": 1, "name": "Petee H"}} - {"after": {"id": 2, "name": "Carl"}} - {"after": {"id": 1, "name": "Lauren", "rowid": 528514320239329281}} - {"after": {"id": 2, "name": "Spencer", "rowid": 528514320239362049}} - ~~~ - - The initial scan displays the state of the tables as of when the changefeed started (therefore, the initial value of `"Petee"` is omitted). - - {% include {{ page.version.version }}/cdc/print-key.md %} - -1. Back in the SQL client, insert more data: - - {% include_cached copy-clipboard.html %} - ~~~ sql - INSERT INTO office_dogs VALUES (3, 'Ernie'); - ~~~ - -1. Back in the terminal where you're watching the Kafka topics, the following output has appeared: - - ~~~ - {"after": {"id": 3, "name": "Ernie"}} - ~~~ - -1. When you are done, exit the SQL shell (`\q`). - -1. To stop `cockroach`: - - Get the process ID of the node: - - {% include_cached copy-clipboard.html %} - ~~~ shell - ps -ef | grep cockroach | grep -v grep - ~~~ - - ~~~ - 501 21766 1 0 6:21PM ttys001 0:00.89 cockroach start-single-node --insecure --listen-addr=localhost - ~~~ - - Gracefully shut down the node, specifying its process ID: - - {% include_cached copy-clipboard.html %} - ~~~ shell - kill -TERM 21766 - ~~~ - - ~~~ - initiating graceful shutdown of server - server drained and shutdown completed - ~~~ - -1. To stop Kafka, move into the extracted `confluent-` directory and stop Confluent: - - {% include_cached copy-clipboard.html %} - ~~~ shell - ./bin/confluent local services stop - ~~~ - -## Create a changefeed connected to Kafka using Avro - -{{site.data.alerts.callout_info}} -[`CREATE CHANGEFEED`](create-changefeed.html) is an [{{ site.data.products.enterprise }}-only](enterprise-licensing.html) feature. For the Core version, see [the `CHANGEFEED FOR` example](#create-a-core-changefeed-using-avro). -{{site.data.alerts.end}} - -In this example, you'll set up a changefeed for a single-node cluster that is connected to a Kafka sink and emits [Avro](https://avro.apache.org/docs/1.8.2/spec.html) records. The changefeed will watch two tables. - -1. If you do not already have one, [request a trial {{ site.data.products.enterprise }} license](enterprise-licensing.html). - -1. Use the [`cockroach start-single-node`](cockroach-start-single-node.html) command to start a single-node cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - cockroach start-single-node --insecure --listen-addr=localhost --background - ~~~ - -1. Download and extract the [Confluent Open Source platform](https://www.confluent.io/download/) (which includes Kafka). - -1. Move into the extracted `confluent-` directory and start Confluent: - - {% include_cached copy-clipboard.html %} - ~~~ shell - ./bin/confluent local services start - ~~~ - - Only `zookeeper`, `kafka`, and `schema-registry` are needed. To troubleshoot Confluent, see [their docs](https://docs.confluent.io/current/installation/installing_cp.html#zip-and-tar-archives) and the [Quick Start Guide](https://docs.confluent.io/platform/current/quickstart/ce-quickstart.html#ce-quickstart). - -1. 
Create two Kafka topics: - - {% include_cached copy-clipboard.html %} - ~~~ shell - ./bin/kafka-topics \ - --create \ - --zookeeper localhost:2181 \ - --replication-factor 1 \ - --partitions 1 \ - --topic office_dogs - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - ./bin/kafka-topics \ - --create \ - --zookeeper localhost:2181 \ - --replication-factor 1 \ - --partitions 1 \ - --topic employees - ~~~ - - {{site.data.alerts.callout_info}} - You are expected to create any Kafka topics with the necessary number of replications and partitions. [Topics can be created manually](https://kafka.apache.org/documentation/#basic_ops_add_topic) or [Kafka brokers can be configured to automatically create topics](https://kafka.apache.org/documentation/#topicconfigs) with a default partition count and replication factor. - {{site.data.alerts.end}} - -{% include {{ page.version.version }}/cdc/sql-cluster-settings-example.md %} - -{% include {{ page.version.version }}/cdc/create-example-db-cdc.md %} - -1. Start the changefeed: - - {% include_cached copy-clipboard.html %} - ~~~ sql - CREATE CHANGEFEED FOR TABLE office_dogs, employees INTO 'kafka://localhost:9092' WITH format = avro, confluent_schema_registry = 'http://localhost:8081'; - ~~~ - - {% include {{ page.version.version }}/cdc/confluent-cloud-sr-url.md %} - - ~~~ - job_id - +--------------------+ - 360645287206223873 - (1 row) - - NOTICE: changefeed will emit to topic office_dogs - NOTICE: changefeed will emit to topic employees - ~~~ - - This will start up the changefeed in the background and return the `job_id`. The changefeed writes to Kafka. - -1. In a new terminal, move into the extracted `confluent-` directory and start watching the Kafka topics: - - {% include_cached copy-clipboard.html %} - ~~~ shell - ./bin/kafka-avro-console-consumer \ - --bootstrap-server=localhost:9092 \ - --from-beginning \ - --include 'office_dogs|employees' - ~~~ - - ~~~ - {"after":{"office_dogs":{"id":{"long":1},"name":{"string":"Petee H"}}}} - {"after":{"office_dogs":{"id":{"long":2},"name":{"string":"Carl"}}}} - {"after":{"employees":{"dog_id":{"long":1},"employee_name":{"string":"Lauren"},"rowid":{"long":528537452042682369}}}} - {"after":{"employees":{"dog_id":{"long":2},"employee_name":{"string":"Spencer"},"rowid":{"long":528537452042747905}}}} - ~~~ - - The initial scan displays the state of the table as of when the changefeed started (therefore, the initial value of `"Petee"` is omitted). - - {% include {{ page.version.version }}/cdc/print-key.md %} - -1. Back in the SQL client, insert more data: - - {% include_cached copy-clipboard.html %} - ~~~ sql - INSERT INTO office_dogs VALUES (3, 'Ernie'); - ~~~ - -1. Back in the terminal where you're watching the Kafka topics, the following output has appeared: - - ~~~ - {"after":{"office_dogs":{"id":{"long":3},"name":{"string":"Ernie"}}}} - ~~~ - -1. When you are done, exit the SQL shell (`\q`). - -1. To stop `cockroach`: - - Get the process ID of the node: - - {% include_cached copy-clipboard.html %} - ~~~ shell - ps -ef | grep cockroach | grep -v grep - ~~~ - - ~~~ - 501 21766 1 0 6:21PM ttys001 0:00.89 cockroach start-single-node --insecure --listen-addr=localhost - ~~~ - - Gracefully shut down the node, specifying its process ID: - - {% include_cached copy-clipboard.html %} - ~~~ shell - kill -TERM 21766 - ~~~ - - ~~~ - initiating graceful shutdown of server - server drained and shutdown completed - ~~~ - -1. 
To stop Kafka, move into the extracted `confluent-` directory and stop Confluent: - - {% include_cached copy-clipboard.html %} - ~~~ shell - ./bin/confluent local services stop - ~~~ - -## Create a changefeed connected to a Google Cloud Pub/Sub sink - -{{site.data.alerts.callout_info}} -{% include feature-phases/preview.md %} -{{site.data.alerts.end}} - -{% include_cached new-in.html version="v22.1" %} In this example, you'll set up a changefeed for a single-node cluster that is connected to a [Google Cloud Pub/Sub](https://cloud.google.com/pubsub/docs/overview) sink. The changefeed will watch a table and send messages to the sink. - -You'll need access to a [Google Cloud Project](https://cloud.google.com/resource-manager/docs/creating-managing-projects) to set up a Pub/Sub sink. In this example, the [Google Cloud CLI](https://cloud.google.com/sdk/docs/install-sdk) (`gcloud`) is used, but you can also complete each of these steps within your [Google Cloud Console](https://cloud.google.com/storage/docs/cloud-console). - -1. If you do not already have one, [request a trial {{ site.data.products.enterprise }} license](enterprise-licensing.html). - -1. Use the [`cockroach start-single-node`](cockroach-start-single-node.html) command to start a single-node cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - cockroach start-single-node --insecure --listen-addr=localhost --background - ~~~ - -1. In this example, you'll run CockroachDB's [Movr](movr.html) application workload to set up some data for your changefeed. - - First create the schema for the workload: - - {% include_cached copy-clipboard.html %} - ~~~shell - cockroach workload init movr "postgresql://root@127.0.0.1:26257?sslmode=disable" - ~~~ - - Then run the workload: - - {% include_cached copy-clipboard.html %} - ~~~shell - cockroach workload run movr --duration=1m "postgresql://root@127.0.0.1:26257?sslmode=disable" - ~~~ - -{% include {{ page.version.version }}/cdc/sql-cluster-settings-example.md %} - -1. Next, you'll prepare your Pub/Sub sink. - - In a new terminal window, create a [Service Account](https://cloud.google.com/iam/docs/understanding-service-accounts) attached to your Google Project: - - {% include_cached copy-clipboard.html %} - ~~~ shell - gcloud iam service-accounts create cdc-demo --project cockroach-project - ~~~ - - In this example, `cdc-demo` will represent the name of the service account, and `cockroach-project` is the name of the Google Project. - - To ensure that your Service Account has the correct permissions to publish to the sink, use the following command to give the Service Account the predefined [Pub/Sub Editor](https://cloud.google.com/iam/docs/understanding-roles#pub-sub-roles) role: - - {% include_cached copy-clipboard.html %} - ~~~ shell - gcloud projects add-iam-policy-binding cockroach-project --member='serviceAccount:cdc-demo@cockroach-project.iam.gserviceaccount.com' --role='roles/pubsub.editor' - ~~~ - -1. Create the Pub/Sub [topic](changefeed-sinks.html#pub-sub-topic-naming) to which your changefeed will emit messages: - - {% include_cached copy-clipboard.html %} - ~~~ shell - gcloud pubsub topics create movr-users --project cockroach-project - ~~~ - - Run the following command to create a subscription within the `movr-users` topic: - - {% include_cached copy-clipboard.html %} - ~~~ shell - gcloud pubsub subscriptions create movr-users-sub --topic=movr-users --topic-project=cockroach-project - ~~~ - -1. 
With the topic and subscription set up, you can now download your Service Account's key. Use the following command to specify where to download the json key file (`key.json`): - - {% include_cached copy-clipboard.html %} - ~~~ shell - gcloud iam service-accounts keys create key.json --iam-account=cdc-demo@cockroach-project.iam.gserviceaccount.com - ~~~ - - Next, base64 encode your credentials key using the command specific to your platform. - - If you're working on macOS: - - {% include_cached copy-clipboard.html %} - ~~~ shell - cat key.json | base64 - ~~~ - - If you're working on Linux, run the following to ensure that lines are not wrapped in the output: - - {% include_cached copy-clipboard.html %} - ~~~ shell - cat key.json | base64 -w 0 - ~~~ - - Copy the output so that you can add it to your [`CREATE CHANGEFEED`](create-changefeed.html) statement in the next step. When you create your changefeed, it is necessary that the key is base64 encoded before passing it in the URI. - -1. Back in the SQL shell, create a changefeed that will emit messages to your Pub/Sub topic. Ensure that you pass the base64-encoded credentials for your Service Account and add your topic's region: - - {% include_cached copy-clipboard.html %} - ~~~ sql - CREATE CHANGEFEED FOR TABLE users INTO 'gcpubsub://cockroach-project?region=us-east1&topic_name=movr-users&AUTH=specified&CREDENTIALS={base64-encoded key}'; - ~~~ - - The output will confirm the topic where the changefeed will emit messages to. - - ~~~ - job_id - ---------------------- - 756641304964792321 - (1 row) - - NOTICE: changefeed will emit to topic movr-users - ~~~ - - To view all the messages delivered to your topic, you can use the Cloud Console. You'll see the messages emitted to the `movr-users-sub` subscription. - - Google Cloud Console changefeed message output from movr database - - To view published messages from your terminal, run the following command: - - {% include_cached copy-clipboard.html %} - ~~~ shell - gcloud pubsub subscriptions pull movr-users-sub --auto-ack --limit=10 - ~~~ - - This command will **only** pull these messages once per subscription. For example, if you ran this command again you would receive 10 different messages in your output. To receive more than one message at a time, pass the `--limit` flag. For more details, see the [gcloud pubsub subscriptions pull](https://cloud.google.com/sdk/gcloud/reference/pubsub/subscriptions/pull) documentation. - - ~~~ - ┌──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────┬─────────────────────────────────────────────────────────┬────────────┬──────────────────┐ - │ DATA │ MESSAGE_ID │ ORDERING_KEY │ ATTRIBUTES │ DELIVERY_ATTEMPT │ - ├──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────┼─────────────────────────────────────────────────────────┼────────────┼──────────────────┤ - │ {"key":["boston","40ef7cfa-5e16-4bd3-9e14-2f23407a66df"],"value":{"after":{"address":"14980 Gentry Plains Apt. 
64","city":"boston","credit_card":"2466765790","id":"40ef7cfa-5e16-4bd3-9e14-2f23407a66df","name":"Vickie Fitzpatrick"}},"topic":"movr-users"} │ 4466153049158588 │ ["boston", "40ef7cfa-5e16-4bd3-9e14-2f23407a66df"] │ │ │ - │ {"key":["los angeles","947ae147-ae14-4800-8000-00000000001d"],"value":{"after":{"address":"35627 Chelsey Tunnel Suite 94","city":"los angeles","credit_card":"2099932769","id":"947ae147-ae14-4800-8000-00000000001d","name":"Kenneth Barnes"}},"topic":"movr-users"} │ 4466144577818136 │ ["los angeles", "947ae147-ae14-4800-8000-00000000001d"] │ │ │ - │ {"key":["amsterdam","c28f5c28-f5c2-4000-8000-000000000026"],"value":{"after":{"address":"14729 Karen Radial","city":"amsterdam","credit_card":"5844236997","id":"c28f5c28-f5c2-4000-8000-000000000026","name":"Maria Weber"}},"topic":"movr-users"} │ 4466151194002912 │ ["amsterdam", "c28f5c28-f5c2-4000-8000-000000000026"] │ │ │ - │ {"key":["new york","6c8ab772-584a-439d-b7b4-fda37767c74c"],"value":{"after":{"address":"34196 Roger Row Suite 6","city":"new york","credit_card":"3117945420","id":"6c8ab772-584a-439d-b7b4-fda37767c74c","name":"James Lang"}},"topic":"movr-users"} │ 4466147099992681 │ ["new york", "6c8ab772-584a-439d-b7b4-fda37767c74c"] │ │ │ - │ {"key":["boston","c56dab0a-63e7-4fbb-a9af-54362c481c41"],"value":{"after":{"address":"83781 Ross Overpass","city":"boston","credit_card":"7044597874","id":"c56dab0a-63e7-4fbb-a9af-54362c481c41","name":"Mark Butler"}},"topic":"movr-users"} │ 4466150752442731 │ ["boston", "c56dab0a-63e7-4fbb-a9af-54362c481c41"] │ │ │ - │ {"key":["amsterdam","f27e09d5-d7cd-4f88-8b65-abb910036f45"],"value":{"after":{"address":"77153 Donald Road Apt. 62","city":"amsterdam","credit_card":"7531160744","id":"f27e09d5-d7cd-4f88-8b65-abb910036f45","name":"Lisa Sandoval"}},"topic":"movr-users"} │ 4466147182359256 │ ["amsterdam", "f27e09d5-d7cd-4f88-8b65-abb910036f45"] │ │ │ - │ {"key":["new york","46d200c0-6924-4cc7-b3c9-3398997acb84"],"value":{"after":{"address":"92843 Carlos Grove","city":"new york","credit_card":"8822366402","id":"46d200c0-6924-4cc7-b3c9-3398997acb84","name":"Mackenzie Malone"}},"topic":"movr-users"} │ 4466142864542016 │ ["new york", "46d200c0-6924-4cc7-b3c9-3398997acb84"] │ │ │ - │ {"key":["boston","52ecbb26-0eab-4e0b-a160-90caa6a7d350"],"value":{"after":{"address":"95044 Eric Corner Suite 33","city":"boston","credit_card":"3982363300","id":"52ecbb26-0eab-4e0b-a160-90caa6a7d350","name":"Brett Porter"}},"topic":"movr-users"} │ 4466152539161631 │ ["boston", "52ecbb26-0eab-4e0b-a160-90caa6a7d350"] │ │ │ - │ {"key":["amsterdam","ae147ae1-47ae-4800-8000-000000000022"],"value":{"after":{"address":"88194 Angela Gardens Suite 94","city":"amsterdam","credit_card":"4443538758","id":"ae147ae1-47ae-4800-8000-000000000022","name":"Tyler Dalton"}},"topic":"movr-users"} │ 4466151398997150 │ ["amsterdam", "ae147ae1-47ae-4800-8000-000000000022"] │ │ │ - │ {"key":["paris","dc28f5c2-8f5c-4800-8000-00000000002b"],"value":{"after":{"address":"2058 Rodriguez Stream","city":"paris","credit_card":"9584502537","id":"dc28f5c2-8f5c-4800-8000-00000000002b","name":"Tony Ortiz"}},"topic":"movr-users"} │ 4466146372222914 │ ["paris", "dc28f5c2-8f5c-4800-8000-00000000002b"] │ │ │ - 
└──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────┴─────────────────────────────────────────────────────────┴────────────┴──────────────────┘ - ~~~ - -## Create a changefeed connected to a cloud storage sink - -{{site.data.alerts.callout_info}} -[`CREATE CHANGEFEED`](create-changefeed.html) is an [{{ site.data.products.enterprise }}-only](enterprise-licensing.html) feature. For the Core version, see [the `CHANGEFEED FOR` example](#create-a-core-changefeed). -{{site.data.alerts.end}} - -In this example, you'll set up a changefeed for a single-node cluster that is connected to an AWS S3 sink. The changefeed watches two tables. Note that you can set up changefeeds for any of [these cloud storage providers](changefeed-sinks.html#cloud-storage-sink). - -1. If you do not already have one, [request a trial {{ site.data.products.enterprise }} license](enterprise-licensing.html). - -1. Use the [`cockroach start-single-node`](cockroach-start-single-node.html) command to start a single-node cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start-single-node --insecure --listen-addr=localhost --background - ~~~ - -{% include {{ page.version.version }}/cdc/sql-cluster-settings-example.md %} - -{% include {{ page.version.version }}/cdc/create-example-db-cdc.md %} - -1. Start the changefeed: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE CHANGEFEED FOR TABLE office_dogs, employees INTO 's3://example-bucket-name/test?AWS_ACCESS_KEY_ID=enter_key-here&AWS_SECRET_ACCESS_KEY=enter_key_here' with updated, resolved='10s'; - ~~~ - - ~~~ - job_id - +--------------------+ - 360645287206223873 - (1 row) - ~~~ - - This will start up the changefeed in the background and return the `job_id`. The changefeed writes to AWS. - -1. Monitor your changefeed on the DB Console. For more information, see [Changefeeds Dashboard](ui-cdc-dashboard.html). - -1. When you are done, exit the SQL shell (`\q`). - -1. To stop `cockroach`: - - Get the process ID of the node: - - {% include_cached copy-clipboard.html %} - ~~~ shell - ps -ef | grep cockroach | grep -v grep - ~~~ - - ~~~ - 501 21766 1 0 6:21PM ttys001 0:00.89 cockroach start-single-node --insecure --listen-addr=localhost - ~~~ - - Gracefully shut down the node, specifying its process ID: - - {% include_cached copy-clipboard.html %} - ~~~ shell - kill -TERM 21766 - ~~~ - - ~~~ - initiating graceful shutdown of server - server drained and shutdown completed - ~~~ - -## Create a changefeed connected to a webhook sink - -{{site.data.alerts.callout_info}} -[`CREATE CHANGEFEED`](create-changefeed.html) is an [{{ site.data.products.enterprise }}-only](enterprise-licensing.html) feature. For the Core version, see [the `CHANGEFEED FOR` example](#create-a-core-changefeed). -{{site.data.alerts.end}} - -{{site.data.alerts.callout_info}} -{% include feature-phases/preview.md %} -{{site.data.alerts.end}} - -In this example, you'll set up a changefeed for a single-node cluster that is connected to a local HTTP server via a webhook. For this example, you'll use an [example HTTP server](https://github.com/cockroachlabs/cdc-webhook-sink-test-server/tree/master/go-https-server) to test out the webhook sink. - -1. 
If you do not already have one, [request a trial {{ site.data.products.enterprise }} license](enterprise-licensing.html). - -1. Use the [`cockroach start-single-node`](cockroach-start-single-node.html) command to start a single-node cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start-single-node --insecure --listen-addr=localhost --background - ~~~ - -1. In this example, you'll run CockroachDB's [Movr](movr.html) application workload to set up some data for your changefeed. - - First create the schema for the workload: - - {% include_cached copy-clipboard.html %} - ~~~shell - cockroach workload init movr "postgresql://root@127.0.0.1:26257?sslmode=disable" - ~~~ - - Then run the workload: - - {% include_cached copy-clipboard.html %} - ~~~shell - cockroach workload run movr --duration=1m "postgresql://root@127.0.0.1:26257?sslmode=disable" - ~~~ - -{% include {{ page.version.version }}/cdc/sql-cluster-settings-example.md %} - -1. In a separate terminal window, set up your HTTP server. Clone the test repository: - - {% include_cached copy-clipboard.html %} - ~~~shell - git clone https://github.com/cockroachlabs/cdc-webhook-sink-test-server.git - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~shell - cd cdc-webhook-sink-test-server/go-https-server - ~~~ - -1. Next make the script executable and then run the server (passing a specific port if preferred, otherwise it will default to `:3000`): - - {% include_cached copy-clipboard.html %} - ~~~shell - chmod +x ./server.sh - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~shell - ./server.sh - ~~~ - -1. Back in your SQL shell, run the following statement to create a changefeed that emits to your webhook sink: - - {% include_cached copy-clipboard.html %} - ~~~sql - CREATE CHANGEFEED FOR TABLE movr.vehicles INTO 'webhook-https://localhost:3000?insecure_tls_skip_verify=true' WITH updated; - ~~~ - - You set up a changefeed on the `vehicles` table, which emits changefeed messages to the local HTTP server. - - See the [options table](create-changefeed.html#options) for more information on the options available for creating your changefeed to a webhook sink. - - ~~~ - job_id - ---------------------- - 687842491801632769 - (1 row) - ~~~ - - In the terminal where your HTTP server is running, you'll receive output similar to: - - ~~~ - 2021/08/24 14:00:21 {"payload":[{"after":{"city":"rome","creation_time":"2019-01-02T03:04:05","current_location":"39141 Travis Curve Suite 87","ext":{"brand":"Schwinn","color":"red"},"id":"d7b18299-c0c4-4304-9ef7-05ae46fd5ee1","dog_owner_id":"5d0c85b5-8866-47cf-a6bc-d032f198e48f","status":"in_use","type":"bike"},"key":["rome","d7b18299-c0c4-4304-9ef7-05ae46fd5ee1"],"topic":"vehicles","updated":"1629813621680097993.0000000000"}],"length":1} - 2021/08/24 14:00:22 {"payload":[{"after":{"city":"san francisco","creation_time":"2019-01-02T03:04:05","current_location":"84888 Wallace Wall","ext":{"color":"black"},"id":"020cf7f4-6324-48a0-9f74-6c9010fb1ab4","dog_owner_id":"b74ea421-fcaf-4d80-9dcc-d222d49bdc17","status":"available","type":"scooter"},"key":["san francisco","020cf7f4-6324-48a0-9f74-6c9010fb1ab4"],"topic":"vehicles","updated":"1629813621680097993.0000000000"}],"length":1} - 2021/08/24 14:00:22 {"payload":[{"after":{"city":"san francisco","creation_time":"2019-01-02T03:04:05","current_location":"3893 Dunn Fall Apt. 
11","ext":{"color":"black"},"id":"21b2ec54-81ad-4af7-a76d-6087b9c7f0f8","dog_owner_id":"8924c3af-ea6e-4e7e-b2c8-2e318f973393","status":"lost","type":"scooter"},"key":["san francisco","21b2ec54-81ad-4af7-a76d-6087b9c7f0f8"],"topic":"vehicles","updated":"1629813621680097993.0000000000"}],"length":1} - ~~~ - - For more detail on emitted changefeed messages, see [responses](changefeed-messages.html#responses). - -
                              - -
                              - -Core changefeeds stream row-level changes to a client until the underlying SQL connection is closed. - -## Create a Core changefeed - -{% include {{ page.version.version }}/cdc/create-core-changefeed.md %} - -## Create a Core changefeed using Avro - -{% include {{ page.version.version }}/cdc/create-core-changefeed-avro.md %} - -For further information on Core changefeeds, see [`EXPERIMENTAL CHANGEFEED FOR`](changefeed-for.html). - -
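-As a minimal sketch of the statement itself (assuming [rangefeeds are enabled](create-and-configure-changefeeds.html#enable-rangefeeds) and a table named `office_dogs` exists), a Core changefeed is started directly from a SQL session and streams results to that session until the connection is closed:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-SET CLUSTER SETTING kv.rangefeed.enabled = true;
-EXPERIMENTAL CHANGEFEED FOR office_dogs;
-~~~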
                              - -## See also - -- [`EXPERIMENTAL CHANGEFEED FOR`](changefeed-for.html) -- [`CREATE CHANGEFEED`](create-changefeed.html) -- [Changefeed Messages](changefeed-messages.html) diff --git a/src/current/v22.1/changefeed-for.md b/src/current/v22.1/changefeed-for.md deleted file mode 100644 index 5edb9d60243..00000000000 --- a/src/current/v22.1/changefeed-for.md +++ /dev/null @@ -1,131 +0,0 @@ ---- -title: EXPERIMENTAL CHANGEFEED FOR -summary: which streams row-level changes to the client indefinitely until the underlying connection is closed or the changefeed is canceled. -toc: true -docs_area: reference.sql ---- - -{{site.data.alerts.callout_info}} -`EXPERIMENTAL CHANGEFEED FOR` is the core implementation of changefeeds. For the [Enterprise-only](enterprise-licensing.html) version, see [`CREATE CHANGEFEED`](create-changefeed.html). -{{site.data.alerts.end}} - -The `EXPERIMENTAL CHANGEFEED FOR` [statement](sql-statements.html) creates a new core changefeed, which streams row-level changes to the client indefinitely until the underlying connection is closed or the changefeed is canceled. A core changefeed can watch one table or multiple tables in a comma-separated list. - -For more information, see [Change Data Capture Overview](change-data-capture-overview.html). - -{{site.data.alerts.callout_info}} -{% include feature-phases/preview.md %} -{{site.data.alerts.end}} - -## Required privileges - -Changefeeds can only be created by superusers, i.e., [members of the `admin` role](security-reference/authorization.html#admin-role). The admin role exists by default with `root` as the member. - -## Considerations - -- Because core changefeeds return results differently than other SQL statements, they require a dedicated database connection with specific settings around result buffering. In normal operation, CockroachDB improves performance by buffering results server-side before returning them to a client; however, result buffering is automatically turned off for core changefeeds. Core changefeeds also have different cancellation behavior than other queries: they can only be canceled by closing the underlying connection or issuing a [`CANCEL QUERY`](cancel-query.html) statement on a separate connection. Combined, these attributes of changefeeds mean that applications should explicitly create dedicated connections to consume changefeed data, instead of using a connection pool as most client drivers do by default. - - This cancellation behavior (i.e., close the underlying connection to cancel the changefeed) also extends to client driver usage; in particular, when a client driver calls `Rows.Close()` after encountering errors for a stream of rows. The pgwire protocol requires that the rows be consumed before the connection is again usable, but in the case of a core changefeed, the rows are never consumed. It is therefore critical that you close the connection, otherwise the application will be blocked forever on `Rows.Close()`. - -- In most cases, each version of a row will be emitted once. However, some infrequent conditions (e.g., node failures, network partitions) will cause them to be repeated. This gives our changefeeds an at-least-once delivery guarantee. For more information, see [Ordering Guarantees](changefeed-messages.html#ordering-guarantees). -- As of v22.1, changefeeds filter out [`VIRTUAL` computed columns](computed-columns.html) from events by default. This is a [backward-incompatible change](../releases/v22.1.html#v22-1-0-backward-incompatible-changes). 
To maintain the changefeed behavior in previous versions where [`NULL`](null-handling.html) values are emitted for virtual computed columns, see the [`virtual_columns`](changefeed-for.html#virtual-columns) option for more detail. - -## Synopsis - -~~~ -> EXPERIMENTAL CHANGEFEED FOR table_name [ WITH (option [= value] [, ...]) ]; -~~~ - -## Parameters - -Parameter | Description -----------|------------ -`table_name` | The name of the table (or tables in a comma separated list) to create a changefeed for. -`option` / `value` | For a list of available options and their values, see [Options](#options) below. - - - -### Options - -Option | Value | Description --------|-------|------------ -`confluent_schema_registry` | Schema Registry address | The [Schema Registry](https://docs.confluent.io/current/schema-registry/docs/index.html#sr) address is required to use `avro`. -`cursor` | [Timestamp](as-of-system-time.html#parameters) | Emits any changes after the given timestamp, but does not output the current state of the table first. If `cursor` is not specified, the changefeed starts by doing a consistent scan of all the watched rows and emits the current value, then moves to emitting any changes that happen after the scan.

                              `cursor` can be used to start a new changefeed where a previous changefeed ended.

                              Example: `CURSOR=1536242855577149065.0000000000` -`end_time` | [Timestamp](as-of-system-time.html#parameters) | **New in v22.1:** Indicate the timestamp up to which the changefeed will emit all events and then complete with a `successful` status. Provide a future timestamp to `end_time` in number of nanoseconds since the [Unix epoch](https://en.wikipedia.org/wiki/Unix_time). For example, `end_time="1655402400000000000"`. -`envelope` | `key_only` / `row` / `wrapped` | `key_only` emits only the key and no value, which is faster if you only want to know when the key changes.

                              `row` emits the row without any additional metadata fields in the message. `row` does not support [`avro` format](#format).

                              `wrapped` emits the full message including any metadata fields. See [Responses](changefeed-messages.html#responses) for more detail on message format.

                              Default: `envelope=wrapped` -`format` | `json` / `avro` | Format of the emitted record. Currently, support for [Avro is limited](changefeed-messages.html#avro-limitations).

                              Default: `format=json`. -`initial_scan` / `no_initial_scan` / `initial_scan_only` | N/A | Control whether or not an initial scan will occur at the start time of a changefeed. `initial_scan_only` will perform an initial scan and then the changefeed job will complete with a `successful` status. You cannot use [`end_time`](#end-time) and `initial_scan_only` simultaneously.

                              If none of these options are specified, an initial scan will occur if there is no [`cursor`](#cursor-option), and will not occur if there is one. This preserves the behavior from previous releases.

You cannot specify `initial_scan` and `no_initial_scan` or `no_initial_scan` and `initial_scan_only` simultaneously.

                              Default: `initial_scan`
If used in conjunction with `cursor`, an initial scan will be performed at the cursor timestamp. If no `cursor` is specified, the initial scan is performed at `now()`. -`min_checkpoint_frequency` | [Duration string](https://pkg.go.dev/time#ParseDuration) | Controls how often nodes flush their progress to the [coordinating changefeed node](change-data-capture-overview.html#how-does-an-enterprise-changefeed-work). Changefeeds will wait for at least the specified duration before flushing. This can help you control the flush frequency to achieve better throughput. If this is set to `0s`, a node will flush as long as the high-water mark has increased for the ranges that particular node is processing. If a changefeed is resumed, then `min_checkpoint_frequency` is the amount of time that the changefeed will need to catch up. That is, it could emit duplicate messages during this time.

                              **Note:** [`resolved`](#resolved-option) messages will not be emitted more frequently than the configured `min_checkpoint_frequency` (but may be emitted less frequently). Since `min_checkpoint_frequency` defaults to `30s`, you **must** configure `min_checkpoint_frequency` to at least the desired `resolved` message frequency if you require `resolved` messages more frequently than `30s`.

                              **Default:** `30s` -`mvcc_timestamp` | N/A | Include the [MVCC](architecture/storage-layer.html#mvcc) timestamp for each emitted row in a changefeed. With the `mvcc_timestamp` option, each emitted row will always contain its MVCC timestamp, even during the changefeed's initial backfill. -`resolved` | [`INTERVAL`](interval.html) | Emits [resolved timestamp](changefeed-messages.html#resolved-def) events for the changefeed. Resolved timestamp events do not emit until all ranges in the changefeed have progressed to a specific point in time.

                              Set an optional minimal duration between emitting resolved timestamps. Example: `resolved='10s'`. This option will **only** emit a resolved timestamp event if the timestamp has advanced and at least the optional duration has elapsed. If unspecified, all resolved timestamps are emitted as the high-water mark advances.

                              **Note:** If you require `resolved` message frequency under `30s`, then you **must** set the [`min_checkpoint_frequency`](#min-checkpoint-frequency) option to at least the desired `resolved` frequency. This is because `resolved` messages will not be emitted more frequently than `min_checkpoint_frequency`, but may be emitted less frequently. -`split_column_families` | N/A | **New in v22.1:** Target a table with multiple columns families. Emit messages for each column family in the target table. Each message will include the label: `table.family`. -`updated` | N/A | Include updated timestamps with each row. -`virtual_columns` | `STRING` | **New in v22.1:** Changefeeds omit [virtual computed columns](computed-columns.html) from emitted [messages](changefeed-messages.html#responses) by default. To maintain the behavior of previous CockroachDB versions where the changefeed would emit [`NULL`](null-handling.html) values for virtual computed columns, set `virtual_columns = "null"` when you start a changefeed.

                              You may also define `virtual_columns = "omitted"`, though this is already the default behavior for v22.1+. If you do not set `"omitted"` on a table with virtual computed columns when you create a changefeed, you will receive a warning that changefeeds will filter out virtual computed values.

                              **Default:** `"omitted"` - -#### Avro limitations - -Below are clarifications for particular SQL types and values for Avro changefeeds: - -{% include {{ page.version.version }}/cdc/avro-limitations.md %} - -## Examples - -### Create a changefeed - -To start a changefeed: - -{% include_cached copy-clipboard.html %} -~~~ sql -EXPERIMENTAL CHANGEFEED FOR cdc_test; -~~~ - -In the terminal where the core changefeed is streaming, the output will appear: - -~~~ -table,key,value -cdc_test,[0],"{""after"": {""a"": 0}}" -~~~ - -For step-by-step guidance on creating a Core changefeed, see the [Changefeed Examples](changefeed-examples.html) page. - -### Create a changefeed with Avro - -To start a changefeed in Avro format: - -{% include_cached copy-clipboard.html %} -~~~ sql -EXPERIMENTAL CHANGEFEED FOR cdc_test WITH format = avro, confluent_schema_registry = 'http://localhost:8081'; -~~~ - -In the terminal where the core changefeed is streaming, the output will appear: - -~~~ -table,key,value -cdc_test,\000\000\000\000\001\002\000,\000\000\000\000\002\002\002\000 -~~~ - -For step-by-step guidance on creating a Core changefeed with Avro, see the [Changefeed Examples](changefeed-examples.html) page. - -### Create a changefeed on a table with column families - -To create a changefeed on a table with column families, use the `FAMILY` keyword for a specific column family: - -{% include_cached copy-clipboard.html %} -~~~ sql -EXPERIMENTAL CHANGEFEED FOR TABLE cdc_test FAMILY f1; -~~~ - -To create a changefeed on a table and output changes for each column family, use the `split_column_families` option: - -{% include_cached copy-clipboard.html %} -~~~ sql -EXPERIMENTAL CHANGEFEED FOR TABLE cdc_test WITH split_column_families; -~~~ - -For step-by-step guidance creating a Core changefeed on a table with multiple column families, see the [Changefeed Examples](changefeed-examples.html) page. - -## See also - -- [Change Data Capture Overview](change-data-capture-overview.html) -- [SQL Statements](sql-statements.html) diff --git a/src/current/v22.1/changefeed-messages.md b/src/current/v22.1/changefeed-messages.md deleted file mode 100644 index 2ee486e791c..00000000000 --- a/src/current/v22.1/changefeed-messages.md +++ /dev/null @@ -1,249 +0,0 @@ ---- -title: Changefeed Messages -summary: Understand changefeed messages and the configuration options. -toc: true -docs_area: stream_data -key: use-changefeeds.html ---- - -Changefeeds emit messages as changes happen to watched tables. CockroachDB changefeeds have an at-least-once delivery guarantee as well as message ordering guarantees. You can also configure the format of changefeed messages with different [options](create-changefeed.html#options) (e.g., `format=avro`). - -This page describes the format and behavior of changefeed messages. You will find the following information on this page: - -- [Responses](#responses): The general format of changefeed messages. -- [Ordering guarantees](#ordering-guarantees): CockroachDB's guarantees for a changefeed's message ordering. -- [Delete messages](#delete-messages): The format of messages when a row is deleted. -- [Schema changes](#schema-changes): The effect of schema changes on a changefeed. -- [Garbage collection](#garbage-collection-and-changefeeds): How protected timestamps and garbage collection interacts with running changefeeds. -- [Avro](#avro): The limitations and type mapping when creating a changefeed using Avro format. 
- -## Responses - -By default, changefeed messages emitted to a [sink](changefeed-sinks.html) contain keys and values of the watched table entries that have changed, with messages composed of the following fields: - -- **Key**: An array always composed of the row's `PRIMARY KEY` field(s) (e.g., `[1]` for `JSON` or `{"id":{"long":1}}` for Avro). -- **Value**: - - One of three possible top-level fields: - - `after`, which contains the state of the row after the update (or `null` for `DELETE`s). - - `updated`, which contains the updated timestamp. - - `resolved`, which is emitted for records representing resolved timestamps. These records do not include an `after` value since they only function as checkpoints. - - For [`INSERT`](insert.html) and [`UPDATE`](update.html), the current state of the row inserted or updated. - - For [`DELETE`](delete.html), `null`. - -For example: - -Statement | Response ------------------------------------------------+----------------------------------------------------------------------- -`INSERT INTO office_dogs VALUES (1, 'Petee');` | JSON: `[1] {"after": {"id": 1, "name": "Petee"}}`
                              Avro: `{"id":{"long":1}} {"after":{"office_dogs":{"id":{"long":1},"name":{"string":"Petee"}}}}` -`DELETE FROM office_dogs WHERE name = 'Petee'` | JSON: `[1] {"after": null}`
                              Avro: `{"id":{"long":1}} {"after":null}` - -To limit messages to just the changed key value, use the [`envelope`](create-changefeed.html#options) option set to `key_only`. - -When a changefeed targets a table with multiple column families, the family name is appended to the table name as part of the topic. See [Tables with columns families in changefeeds](changefeeds-on-tables-with-column-families.html#message-format) for guidance. - -For webhook sinks, the response format arrives as a batch of changefeed messages with a `payload` and `length`. Batching is done with a per-key guarantee, which means that messages with the same key are considered for the same batch. Note that batches are only collected for row updates and not [resolved timestamps](create-changefeed.html#resolved-option): - -~~~ -{"payload": [{"after" : {"a" : 1, "b" : "a"}, "key": [1], "topic": "foo"}, {"after": {"a": 1, "b": "b"}, "key": [1], "topic": "foo" }], "length":2} -~~~ - -See [changefeed files](create-changefeed.html#files) for more detail on the file naming format for {{ site.data.products.enterprise }} changefeeds. - -## Ordering guarantees - -- In most cases, each version of a row will be emitted once. However, some infrequent conditions (e.g., node failures, network partitions) will cause them to be repeated. This gives our changefeeds an **at-least-once delivery guarantee**. - -- Once a row has been emitted with some timestamp, no previously unseen versions of that row will be emitted with a lower timestamp. That is, you will never see a _new_ change for that row at an earlier timestamp. - - For example, if you ran the following: - - ~~~ sql - > CREATE TABLE foo (id INT PRIMARY KEY DEFAULT unique_rowid(), name STRING); - > CREATE CHANGEFEED FOR TABLE foo INTO 'kafka://localhost:9092' WITH UPDATED; - > INSERT INTO foo VALUES (1, 'Carl'); - > UPDATE foo SET name = 'Petee' WHERE id = 1; - ~~~ - - You'd expect the changefeed to emit: - - ~~~ shell - [1] {"__crdb__": {"updated": }, "id": 1, "name": "Carl"} - [1] {"__crdb__": {"updated": }, "id": 1, "name": "Petee"} - ~~~ - - It is also possible that the changefeed emits an out of order duplicate of an earlier value that you already saw: - - ~~~ shell - [1] {"__crdb__": {"updated": }, "id": 1, "name": "Carl"} - [1] {"__crdb__": {"updated": }, "id": 1, "name": "Petee"} - [1] {"__crdb__": {"updated": }, "id": 1, "name": "Carl"} - ~~~ - - However, you will **never** see an output like the following (i.e., an out of order row that you've never seen before): - - ~~~ shell - [1] {"__crdb__": {"updated": }, "id": 1, "name": "Petee"} - [1] {"__crdb__": {"updated": }, "id": 1, "name": "Carl"} - ~~~ - -- If a row is modified more than once in the same transaction, only the last change will be emitted. - -- Rows are sharded between Kafka partitions by the row’s [primary key](primary-key.html). - -- The `UPDATED` option adds an "updated" timestamp to each emitted row. You can also use the [`RESOLVED` option](create-changefeed.html#resolved-option) to emit "resolved" timestamp messages to each Kafka partition. A "resolved" timestamp is a guarantee that no (previously unseen) rows with a lower update timestamp will be emitted on that partition. 
- - For example: - - ~~~ shell - {"__crdb__": {"updated": "1532377312562986715.0000000000"}, "id": 1, "name": "Petee H"} - {"__crdb__": {"updated": "1532377306108205142.0000000000"}, "id": 2, "name": "Carl"} - {"__crdb__": {"updated": "1532377358501715562.0000000000"}, "id": 3, "name": "Ernie"} - {"__crdb__":{"resolved":"1532379887442299001.0000000000"}} - {"__crdb__":{"resolved":"1532379888444290910.0000000000"}} - {"__crdb__":{"resolved":"1532379889448662988.0000000000"}} - ... - {"__crdb__":{"resolved":"1532379922512859361.0000000000"}} - {"__crdb__": {"updated": "1532379923319195777.0000000000"}, "id": 4, "name": "Lucky"} - ~~~ - -- With duplicates removed, an individual row is emitted in the same order as the transactions that updated it. However, this is not true for updates to two different rows, even two rows in the same table. - - To compare two different rows for [happens-before](https://en.wikipedia.org/wiki/Happened-before), compare the "updated" timestamp. This works across anything in the same cluster (e.g., tables, nodes, etc.). - - Resolved timestamp notifications on every Kafka partition can be used to provide strong ordering and global consistency guarantees by buffering records in between timestamp closures. Use the "resolved" timestamp to see every row that changed at a certain time. - - The complexity with timestamps is necessary because CockroachDB supports transactions that can affect any part of the cluster, and it is not possible to horizontally divide the transaction log into independent changefeeds. For more information about this, [read our blog post on CDC](https://www.cockroachlabs.com/blog/change-data-capture/). - -## Delete messages - -Deleting a row will result in a changefeed outputting the primary key of the deleted row and a null value. For example, with default options, deleting the row with primary key `5` will output: - -~~~ shell -[5] {"after": null} -~~~ - -In some unusual situations you may receive a delete message for a row without first seeing an insert message. For example, if an attempt is made to delete a row that does not exist, you may or may not get a delete message because the changefeed behavior is undefined to allow for optimizations at the storage layer. Similarly, if there are multiple writes to a row within a single transaction, only the last one will propagate to a changefeed. This means that creating and deleting a row within the same transaction will never result in an insert message, but may result in a delete message. - -## Schema Changes - -### Avro schema changes - -To ensure that the Avro schemas that CockroachDB publishes will work with the schema compatibility rules used by the Confluent schema registry, CockroachDB emits all fields in Avro as nullable unions. This ensures that Avro and Confluent consider the schemas to be both backward- and forward-compatible, since the Confluent Schema Registry has a different set of rules than Avro for schemas to be backward- and forward-compatible. - -Note that the original CockroachDB column definition is also included in the schema as a doc field, so it's still possible to distinguish between a `NOT NULL` CockroachDB column and a `NULL` CockroachDB column. - -### Schema changes with column backfill - -When schema changes with column backfill (e.g., adding a column with a default, adding a computed column, adding a `NOT NULL` column, dropping a column) are made to watched rows, the changefeed will emit some duplicates during the backfill. 
When it finishes, CockroachDB outputs all watched rows using the new schema. When using Avro, rows that have been backfilled by a schema change are always re-emitted. - -For an example of a schema change with column backfill, start with the changefeed created in this [Kafka example](changefeed-examples.html#create-a-changefeed-connected-to-kafka): - -~~~ -[1] {"id": 1, "name": "Petee H"} -[2] {"id": 2, "name": "Carl"} -[3] {"id": 3, "name": "Ernie"} -~~~ - -Add a column to the watched table: - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE office_dogs ADD COLUMN likes_treats BOOL DEFAULT TRUE; -~~~ - -The changefeed emits duplicate records 1, 2, and 3 before outputting the records using the new schema: - -~~~ -[1] {"id": 1, "name": "Petee H"} -[2] {"id": 2, "name": "Carl"} -[3] {"id": 3, "name": "Ernie"} -[1] {"id": 1, "name": "Petee H"} # Duplicate -[2] {"id": 2, "name": "Carl"} # Duplicate -[3] {"id": 3, "name": "Ernie"} # Duplicate -[1] {"id": 1, "likes_treats": true, "name": "Petee H"} -[2] {"id": 2, "likes_treats": true, "name": "Carl"} -[3] {"id": 3, "likes_treats": true, "name": "Ernie"} -~~~ - -When using the [`schema_change_policy = nobackfill` option](create-changefeed.html#schema-policy), the changefeed will still emit duplicate records for the table that is being altered. In the preceding output, the records marked as `# Duplicate` will still emit with this option, but not the new schema records. - -{{site.data.alerts.callout_info}} -{% include {{ page.version.version }}/cdc/virtual-computed-column-cdc.md %} -{{site.data.alerts.end}} - -## Garbage collection and changefeeds - -{% include_cached new-in.html version="v22.1" %} By default, [protected timestamps](architecture/storage-layer.html#protected-timestamps) will protect changefeed data from [garbage collection](architecture/storage-layer.html#garbage-collection) up to the time of the [_checkpoint_](change-data-capture-overview.html#how-does-an-enterprise-changefeed-work). - -Protected timestamps will protect changefeed data from garbage collection in the following scenarios: - -- The downstream [changefeed sink](changefeed-sinks.html) is unavailable. Protected timestamps will protect changes until you either [cancel](cancel-job.html) the changefeed or the sink becomes available once again. -- You [pause](pause-job.html) a changefeed with the [`protect_data_from_gc_on_pause`](create-changefeed.html#protect-pause) option enabled. Protected timestamps will protect changes until you [resume](resume-job.html) the changefeed. - -However, if the changefeed lags too far behind, the protected changes could cause data storage issues. To release the protected timestamps and allow garbage collection to resume, you can cancel the changefeed or [resume](resume-job.html) in the case of a paused changefeed. - -We recommend [monitoring](monitor-and-debug-changefeeds.html) storage and the number of running changefeeds. If a changefeed is not advancing and is retrying, it will (without limit) accumulate garbage while it retries to run. - -When `protect_data_from_gc_on_pause` is **unset**, pausing the changefeed will release the existing protected timestamp record. As a result, you could lose the changes if the changefeed remains paused longer than the [garbage collection](configure-replication-zones.html#gc-ttlseconds) window. - -The only ways for changefeeds to **not** protect data are: - -- You pause the changefeed without `protect_data_from_gc_on_pause` set. -- You cancel the changefeed. 
-- The changefeed fails without [`on_error=pause`](create-changefeed.html#on-error) set. - -## Avro - -The following sections provide information on Avro usage with CockroachDB changefeeds. Creating a changefeed using Avro is available in Core and {{ site.data.products.enterprise }} changefeeds. - -### Avro limitations - -Below are clarifications for particular SQL types and values for Avro changefeeds: - -{% include {{ page.version.version }}/cdc/avro-limitations.md %} - -### Avro types - -Below is a mapping of CockroachDB types to Avro types: - -CockroachDB Type | Avro Type | Avro Logical Type ------------------+-----------+--------------------- -[`ARRAY`](array.html) | [`ARRAY`](https://avro.apache.org/docs/1.8.1/spec.html#schema_complex) | -[`BIT`](bit.html) | Array of [`LONG`](https://avro.apache.org/docs/1.8.1/spec.html#schema_primitive) | -[`BLOB`](bytes.html) | [`BYTES`](https://avro.apache.org/docs/1.8.1/spec.html#schema_primitive) | -[`BOOL`](bool.html) | [`BOOLEAN`](https://avro.apache.org/docs/1.8.1/spec.html#schema_primitive) | -[`BYTEA`](bytes.html) | [`BYTES`](https://avro.apache.org/docs/1.8.1/spec.html#schema_primitive) | -[`BYTES`](bytes.html) | [`BYTES`](https://avro.apache.org/docs/1.8.1/spec.html#schema_primitive) | -[`COLLATE`](collate.html) | [`STRING`](https://avro.apache.org/docs/1.8.1/spec.html#schema_primitive) | -[`DATE`](date.html) | [`INT`](https://avro.apache.org/docs/1.8.1/spec.html#schema_primitive) | [`DATE`](https://avro.apache.org/docs/1.8.1/spec.html#Date) -[`DECIMAL`](decimal.html) | [`STRING`](https://avro.apache.org/docs/1.8.1/spec.html#schema_primitive), [`BYTES`](https://avro.apache.org/docs/1.8.1/spec.html#schema_primitive) | [`DECIMAL`](https://avro.apache.org/docs/1.8.1/spec.html#Decimal) -[`ENUMS`](enum.html) | [`STRING`](https://avro.apache.org/docs/1.8.1/spec.html#schema_primitive) | -[`FLOAT`](float.html) | [`DOUBLE`](https://avro.apache.org/docs/1.8.1/spec.html#schema_primitive) | -[`INET`](inet.html) | [`STRING`](https://avro.apache.org/docs/1.8.1/spec.html#schema_primitive) | -[`INT`](int.html) | [`LONG`](https://avro.apache.org/docs/1.8.1/spec.html#schema_primitive) | -[`INTERVAL`](interval.html) | [`STRING`](https://avro.apache.org/docs/1.8.1/spec.html#schema_primitive) | -[`JSONB`](jsonb.html) | [`STRING`](https://avro.apache.org/docs/1.8.1/spec.html#schema_primitive) | -[`STRING`](string.html) | [`STRING`](https://avro.apache.org/docs/1.8.1/spec.html#schema_primitive) | -[`TIME`](time.html) | [`LONG`](https://avro.apache.org/docs/1.8.1/spec.html#schema_primitive) | [`TIME-MICROS`](https://avro.apache.org/docs/1.8.1/spec.html#Time+%28microsecond+precision%29) -[`TIMESTAMP`](timestamp.html) | [`LONG`](https://avro.apache.org/docs/1.8.1/spec.html#schema_primitive) | [`TIME-MICROS`](https://avro.apache.org/docs/1.8.1/spec.html#Time+%28microsecond+precision%29) -[`TIMESTAMPTZ`](timestamp.html) | [`LONG`](https://avro.apache.org/docs/1.8.1/spec.html#schema_primitive) | [`TIME-MICROS`](https://avro.apache.org/docs/1.8.1/spec.html#Time+%28microsecond+precision%29) -[`UUID`](uuid.html) | [`STRING`](https://avro.apache.org/docs/1.8.1/spec.html#schema_primitive) | -[`VARBIT`](bit.html)| Array of [`LONG`](https://avro.apache.org/docs/1.8.1/spec.html#schema_primitive) | - -{{site.data.alerts.callout_info}} -The `DECIMAL` type is a union between Avro `STRING` and Avro `DECIMAL` types. -{{site.data.alerts.end}} - -## CSV - -You can use the [`format=csv`](create-changefeed.html#format) option to emit CSV format messages from your changefeed. 
However, there are the following limitations with this option: - -- It **only** works in combination with the [`initial_scan = 'only'`](create-changefeed.html#initial-scan) option. -- It does **not** work when used with the [`diff`](create-changefeed.html#diff-opt) or [`resolved`](create-changefeed.html#resolved-option) options. - -## See also - -- [Online Schema Changes](online-schema-changes.html) -- [Change Data Capture Overview](change-data-capture-overview.html) -- [Create and Configure Changefeeds](create-and-configure-changefeeds.html) - diff --git a/src/current/v22.1/changefeed-sinks.md b/src/current/v22.1/changefeed-sinks.md deleted file mode 100644 index b78d6ac6f9a..00000000000 --- a/src/current/v22.1/changefeed-sinks.md +++ /dev/null @@ -1,484 +0,0 @@ ---- -title: Changefeed Sinks -summary: Define a changefeed sink URI and configure specific sinks. -toc: true -docs_area: stream_data ---- - -{{ site.data.products.enterprise }} changefeeds emit messages to configurable downstream sinks. CockroachDB supports the following sinks: - -- [Kafka](#kafka) -- [Google Cloud Pub/Sub](#google-cloud-pub-sub) -- [Cloud Storage](#cloud-storage-sink) / HTTP -- [Webhook](#webhook-sink) - -See [`CREATE CHANGEFEED`](create-changefeed.html) for more detail on the [query parameters](create-changefeed.html#query-parameters) available when setting up a changefeed. - -For a step-by-step example connecting a changefeed to a sink, see the [Changefeed Examples](changefeed-examples.html) page. - -## Sink URI - -The sink URI follows the basic format of: - -~~~ -'{scheme}://{host}:{port}?{query_parameters}' -~~~ - -URI Component | Description --------------------+------------------------------------------------------------------ -`scheme` | The type of sink: [`kafka`](#kafka), [`gcpubsub`](#google-cloud-pub-sub), any [cloud storage sink](#cloud-storage-sink), or [webhook sink](#webhook-sink). -`host` | The sink's hostname or IP address. -`port` | The sink's port. -`query_parameters` | The sink's [query parameters](create-changefeed.html#query-parameters). - -To set a different sink URI to an existing changefeed, use the [`sink` option](alter-changefeed.html#sink-example) with `ALTER CHANGEFEED`. - -## Kafka - -Example of a Kafka sink URI: - -~~~ -'kafka://broker.address.com:9092?topic_prefix=bar_&tls_enabled=true&ca_cert=LS0tLS1CRUdJTiBDRVJUSUZ&sasl_enabled=true&sasl_user={sasl user}&sasl_password={url-encoded password}&sasl_mechanism=SCRAM-SHA-256' -~~~ - -The following table lists the available parameters for Kafka URIs: - -URI Parameter | Description --------------------+------------------------------------------------------------------ -`topic_name` | The topic name to which messages will be sent. See the following section on [Topic Naming](#topic-naming) for detail on how topics are created. -`topic_prefix` | Adds a prefix to all topic names.

                              For example, `CREATE CHANGEFEED FOR TABLE foo INTO 'kafka://...?topic_prefix=bar_'` would emit rows under the topic `bar_foo` instead of `foo`. -`tls_enabled` | If `true`, enable Transport Layer Security (TLS) on the connection to Kafka. This can be used with a `ca_cert` (see below).

                              **Default:** `false` -`ca_cert` | The base64-encoded `ca_cert` file. Specify `ca_cert` for a Kafka sink.

                              Note: To encode your `ca.cert`, run `base64 -w 0 ca.cert`. -`client_cert` | The base64-encoded Privacy Enhanced Mail (PEM) certificate. This is used with `client_key`. -`client_key` | The base64-encoded private key for the PEM certificate. This is used with `client_cert`.

                              {% include {{ page.version.version }}/cdc/client-key-encryption.md %} -`sasl_enabled` | If `true`, the authentication protocol can be set to SCRAM or PLAIN using the `sasl_mechanism` parameter. You must have `tls_enabled` set to `true` to use SASL.

                              **Default:** `false` -`sasl_mechanism` | Can be set to [`SCRAM-SHA-256`](https://docs.confluent.io/platform/current/kafka/authentication_sasl/authentication_sasl_scram.html), [`SCRAM-SHA-512`](https://docs.confluent.io/platform/current/kafka/authentication_sasl/authentication_sasl_scram.html), or [`PLAIN`](https://docs.confluent.io/current/kafka/authentication_sasl/authentication_sasl_plain.html). A `sasl_user` and `sasl_password` are required.

                              **Default:** `PLAIN` -`sasl_user` | Your SASL username. -`sasl_password` | Your SASL password. -`insecure_tls_skip_verify` | If `true`, disable client-side validation of responses. Note that a CA certificate is still required; this parameter means that the client will not verify the certificate. **Warning:** Use this query parameter with caution, as it creates [MITM](https://en.wikipedia.org/wiki/Man-in-the-middle_attack) vulnerabilities unless combined with another method of authentication.

                              **Default:** `false` - -{% include {{ page.version.version }}/cdc/options-table-note.md %} - -### Topic naming - -By default, a Kafka topic has the same name as the table on which a changefeed was created. If you create a changefeed on multiple tables, the changefeed will write to multiple topics corresponding to those table names. When you run `CREATE CHANGEFEED` to a Kafka sink, the output will display the job ID as well as the topic name(s) that the changefeed will emit to. - -To modify the default topic naming, you can specify a [topic prefix](create-changefeed.html#topic-prefix-param), [an arbitrary topic name](create-changefeed.html#topic-name-param), or use the [`full_table_name` option](create-changefeed.html#full-table-option). Using the [`topic_name`](create-changefeed.html#topic-name-param) parameter, you can specify an arbitrary topic name and feed all tables into that topic. - -You can either manually create a topic in your Kafka cluster before starting the changefeed, or the topic will be automatically created when the changefeed connects to your Kafka cluster. - -{{site.data.alerts.callout_info}} -You must have the Kafka cluster setting [`auto.create.topics.enable`](https://kafka.apache.org/documentation/#brokerconfigs_auto.create.topics.enable) set to `true` for automatic topic creation. This will create the topic when the changefeed sends its first message. If you create the consumer before that, you will also need the Kafka consumer configuration [`allow.auto.create.topics`](https://kafka.apache.org/documentation/#consumerconfigs_allow.auto.create.topics) to be set to `true`. -{{site.data.alerts.end}} - -Kafka has the following topic limitations: - -- [Legal characters](https://github.com/apache/kafka/blob/0.10.2/core/src/main/scala/kafka/common/Topic.scala#L29) are numbers, letters, and `[._-]`. -- The maximum character length of a topic name is 249. -- Topics with a period (`.`) and underscore (`_`) can collide on internal Kafka data structures, so you should use either but not both. -- Characters not accepted by Kafka will be automatically encoded as unicode characters by CockroachDB. - -### Kafka sink configuration - - The `kafka_sink_config` option allows configuration of a changefeed's message delivery, Kafka server version, and batching parameters. - -{{site.data.alerts.callout_danger}} -Each of the following settings have significant impact on a changefeed's behavior, such as latency. For example, it is possible to configure batching parameters to be very high, which would negatively impact changefeed latency. As a result it would take a long time to see messages coming through to the sink. Also, large batches may be rejected by the Kafka server unless it's separately configured to accept a high [`max.message.bytes`](https://kafka.apache.org/documentation/#brokerconfigs_message.max.bytes). -{{site.data.alerts.end}} - -~~~ -kafka_sink_config='{"Flush": {"MaxMessages": 1, "Frequency": "1s"}, "Version": "0.8.2.0", "RequiredAcks": "ONE", "Compression": "GZIP" }' -~~~ - -`"Flush"."MaxMessages"` and `"Flush"."Frequency"` are configurable batching parameters depending on latency and throughput needs. For example, if `"MaxMessages"` is set to 1000 and `"Frequency"` to 1 second, it will flush to Kafka either after 1 second or after 1000 messages are batched, whichever comes first. It's important to consider that if there are not many messages, then a `"1s"` frequency will add 1 second latency. 
However, if there is a larger influx of messages these will be flushed quicker. - -Using the default values or not setting fields in `kafka_sink_config` will mean that changefeed messages emit immediately. - -The configurable fields are as follows: - -Field | Type | Description | Default --------------------+---------------------+------------------+------------------- -`Flush.MaxMessages` | [`INT`](int.html) | Sets the maximum number of messages the producer can send in a single broker request. Any messages beyond the configured limit will be blocked. Increasing this value allows all messages to be sent in a batch. | `1000` -`Flush.Messages` | [`INT`](int.html) | Configure the number of messages the changefeed should batch before flushing. | `0` -`Flush.Bytes` | [`INT`](int.html) | When the total byte size of all the messages in the batch reaches this amount, it should be flushed. | `0` -`Flush.Frequency` | [Duration string](https://pkg.go.dev/time#ParseDuration) | When this amount of time has passed since the **first** received message in the batch without it flushing, it should be flushed. | `"0s"` -`"Version"` | [`STRING`](string.html) | Sets the appropriate Kafka cluster version, which can be used to connect to [Kafka versions < v1.0](https://docs.confluent.io/platform/current/installation/versions-interoperability.html) (`kafka_sink_config='{"Version": "0.8.2.0"}'`). | `"1.0.0.0"` -`"RequiredAcks"` | [`STRING`](string.html) | Specifies what a successful write to Kafka is. CockroachDB [guarantees at least once delivery of messages](changefeed-messages.html#ordering-guarantees) — this value defines the **delivery**. The possible values are:

                              `"ONE"`: a write to Kafka is successful once the leader node has committed and acknowledged the write. Note that this has the potential risk of dropped messages; if the leader node acknowledges before replicating to a quorum of other Kafka nodes, but then fails.

                              `"NONE"`: no Kafka brokers are required to acknowledge that they have committed the message. This will decrease latency and increase throughput, but comes at the cost of lower consistency.

                              `"ALL"`: a quorum must be reached (that is, most Kafka brokers have committed the message) before the leader can acknowledge. This is the highest consistency level. | `"ONE"` -`"Compression"` | [`STRING`](string.html) | New in v22.1.12: Sets a compression protocol that the changefeed should use when emitting events. The possible values are: `"NONE"`, `"GZIP"`, `"SNAPPY"`, `"LZ4"`, `"ZSTD"`. Note that the values must be capitalized. | `"NONE"` - -### Kafka sink messages - -The following shows the [Avro](changefeed-messages.html#avro) messages for a changefeed emitting to Kafka: - -~~~ -{ - "after":{ - "users":{ - "name":{ - "string":"Michael Clark" - }, - "address":{ - "string":"85957 Ashley Junctions" - }, - "credit_card":{ - "string":"4144089313" - }, - "id":{ - "string":"d84cf3b6-7029-4d4d-aa81-e5caa9cce09e" - }, - "city":{ - "string":"seattle" - } - } - }, - "updated":{ - "string":"1659643584586630201.0000000000" - } - } - { - "after":{ - "users":{ - "address":{ - "string":"17068 Christopher Isle" - }, - "credit_card":{ - "string":"6664835435" - }, - "id":{ - "string":"11b99275-92ce-4244-be61-4dae21973f87" - }, - "city":{ - "string":"amsterdam" - }, - "name":{ - "string":"John Soto" - } - } - }, - "updated":{ - "string":"1659643585384406152.0000000000" - } - } -~~~ - -See the [Changefeed Examples](changefeed-examples.html) page and the [Stream a Changefeed to a Confluent Cloud Kafka Cluster](stream-a-changefeed-to-a-confluent-cloud-kafka-cluster.html) tutorial for examples to set up a Kafka sink. - -{% include {{ page.version.version }}/cdc/note-changefeed-message-page.md %} - -## Google Cloud Pub/Sub - -{{site.data.alerts.callout_info}} -{% include feature-phases/preview.md %} -{{site.data.alerts.end}} - -{% include_cached new-in.html version="v22.1" %} Changefeeds can deliver messages to a Google Cloud Pub/Sub sink, which is integrated with Google Cloud Platform. - -A Pub/Sub sink URI follows this example: - -~~~ -'gcpubsub://{project name}?region={region}&topic_name={topic name}&AUTH=specified&CREDENTIALS={base64-encoded key}' -~~~ - - - -URI Parameter | Description --------------------+------------------------------------------------------------------ -`project name` | The [Google Cloud Project](https://cloud.google.com/resource-manager/docs/creating-managing-projects) name. -`region` | (Required) The single region to which all output will be sent. -`topic_name` | (Optional) The topic name to which messages will be sent. See the following section on [Topic Naming](#topic-naming) for detail on how topics are created. -`AUTH` | The authentication parameter can define either `specified` (default) or `implicit` authentication. To use `specified` authentication, pass your [Service Account](https://cloud.google.com/iam/docs/understanding-service-accounts) credentials with the URI. To use `implicit` authentication, configure these credentials via an environment variable. See [Use Cloud Storage for Bulk Operations](use-cloud-storage-for-bulk-operations.html#authentication) for examples of each of these. -`CREDENTIALS` | (Required with `AUTH=specified`) The base64-encoded credentials of your Google [Service Account](https://cloud.google.com/iam/docs/understanding-service-accounts) credentials. - -{% include {{ page.version.version }}/cdc/options-table-note.md %} - -When using Pub/Sub as your downstream sink, consider the following: - -- It only supports `JSON` message format. 
-- Your Google Service Account must have the [Pub/Sub Editor](https://cloud.google.com/iam/docs/understanding-roles#pub-sub-roles) role assigned at the [project level](https://cloud.google.com/resource-manager/docs/access-control-proj#using_predefined_roles). -- You must specify the `region` parameter in the URI to maintain [ordering guarantees](changefeed-messages.html#ordering-guarantees). Unordered messages are not supported, see [Known Limitations](change-data-capture-overview.html#known-limitations) for more information. -- Changefeeds connecting to a Pub/Sub sink do not support the `topic_prefix` option. - -For more information, read about compatible changefeed [options](create-changefeed.html#options) and the [Create a changefeed connected to a Google Cloud Pub/Sub sink](changefeed-examples.html#create-a-changefeed-connected-to-a-google-cloud-pub-sub-sink) example. - -### Pub/Sub topic naming - -When running a `CREATE CHANGEFEED` statement to Pub/Sub, it will try to create a topic automatically. When you do not specify the topic in the URI with the [`topic_name`](create-changefeed.html#topic-name-param) parameter, the changefeed will use the table name to create the topic name. If the topic already exists in your Pub/Sub sink, the changefeed will write to it. You can also use the [`full_table_name`](create-changefeed.html#full-table-option) option to create a topic using the fully qualified table name. - -The output from `CREATE CHANGEFEED` will display the job ID as well as the topic name(s) that the changefeed will emit to. - -You can manually create a topic in your Pub/Sub sink before starting the changefeed. See the [Creating a changefeed to Google Cloud Pub/Sub](changefeed-examples.html#create-a-changefeed-connected-to-a-google-cloud-pub-sub-sink) example for more detail. To understand restrictions on user-specified topic names, see Google's documentation on [Guidelines to name a topic or subscription](https://cloud.google.com/pubsub/docs/admin#resource_names). - -For a list of compatible parameters and options, see [Parameters](create-changefeed.html#parameters) on the `CREATE CHANGEFEED` page. - -### Pub/Sub sink messages - -The following shows the default JSON messages for a changefeed emitting to Pub/Sub. These changefeed messages were emitted as part of the [Create a changefeed connected to a Google Cloud Pub/Sub sink](changefeed-examples.html#create-a-changefeed-connected-to-a-google-cloud-pub-sub-sink) example: - -~~~ -┌──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────┬─────────────────────────────────────────────────────────┬────────────┬──────────────────┐ -│ DATA │ MESSAGE_ID │ ORDERING_KEY │ ATTRIBUTES │ DELIVERY_ATTEMPT │ -├──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────┼─────────────────────────────────────────────────────────┼────────────┼──────────────────┤ -│ {"key":["boston","40ef7cfa-5e16-4bd3-9e14-2f23407a66df"],"value":{"after":{"address":"14980 Gentry Plains Apt. 
64","city":"boston","credit_card":"2466765790","id":"40ef7cfa-5e16-4bd3-9e14-2f23407a66df","name":"Vickie Fitzpatrick"}},"topic":"movr-users"} │ 4466153049158588 │ ["boston", "40ef7cfa-5e16-4bd3-9e14-2f23407a66df"] │ │ │ -│ {"key":["los angeles","947ae147-ae14-4800-8000-00000000001d"],"value":{"after":{"address":"35627 Chelsey Tunnel Suite 94","city":"los angeles","credit_card":"2099932769","id":"947ae147-ae14-4800-8000-00000000001d","name":"Kenneth Barnes"}},"topic":"movr-users"} │ 4466144577818136 │ ["los angeles", "947ae147-ae14-4800-8000-00000000001d"] │ │ │ -│ {"key":["amsterdam","c28f5c28-f5c2-4000-8000-000000000026"],"value":{"after":{"address":"14729 Karen Radial","city":"amsterdam","credit_card":"5844236997","id":"c28f5c28-f5c2-4000-8000-000000000026","name":"Maria Weber"}},"topic":"movr-users"} │ 4466151194002912 │ ["amsterdam", "c28f5c28-f5c2-4000-8000-000000000026"] │ │ │ -│ {"key":["new york","6c8ab772-584a-439d-b7b4-fda37767c74c"],"value":{"after":{"address":"34196 Roger Row Suite 6","city":"new york","credit_card":"3117945420","id":"6c8ab772-584a-439d-b7b4-fda37767c74c","name":"James Lang"}},"topic":"movr-users"} │ 4466147099992681 │ ["new york", "6c8ab772-584a-439d-b7b4-fda37767c74c"] │ │ │ -│ {"key":["boston","c56dab0a-63e7-4fbb-a9af-54362c481c41"],"value":{"after":{"address":"83781 Ross Overpass","city":"boston","credit_card":"7044597874","id":"c56dab0a-63e7-4fbb-a9af-54362c481c41","name":"Mark Butler"}},"topic":"movr-users"} │ 4466150752442731 │ ["boston", "c56dab0a-63e7-4fbb-a9af-54362c481c41"] │ │ │ -│ {"key":["amsterdam","f27e09d5-d7cd-4f88-8b65-abb910036f45"],"value":{"after":{"address":"77153 Donald Road Apt. 62","city":"amsterdam","credit_card":"7531160744","id":"f27e09d5-d7cd-4f88-8b65-abb910036f45","name":"Lisa Sandoval"}},"topic":"movr-users"} │ 4466147182359256 │ ["amsterdam", "f27e09d5-d7cd-4f88-8b65-abb910036f45"] │ │ │ -│ {"key":["new york","46d200c0-6924-4cc7-b3c9-3398997acb84"],"value":{"after":{"address":"92843 Carlos Grove","city":"new york","credit_card":"8822366402","id":"46d200c0-6924-4cc7-b3c9-3398997acb84","name":"Mackenzie Malone"}},"topic":"movr-users"} │ 4466142864542016 │ ["new york", "46d200c0-6924-4cc7-b3c9-3398997acb84"] │ │ │ -│ {"key":["boston","52ecbb26-0eab-4e0b-a160-90caa6a7d350"],"value":{"after":{"address":"95044 Eric Corner Suite 33","city":"boston","credit_card":"3982363300","id":"52ecbb26-0eab-4e0b-a160-90caa6a7d350","name":"Brett Porter"}},"topic":"movr-users"} │ 4466152539161631 │ ["boston", "52ecbb26-0eab-4e0b-a160-90caa6a7d350"] │ │ │ -│ {"key":["amsterdam","ae147ae1-47ae-4800-8000-000000000022"],"value":{"after":{"address":"88194 Angela Gardens Suite 94","city":"amsterdam","credit_card":"4443538758","id":"ae147ae1-47ae-4800-8000-000000000022","name":"Tyler Dalton"}},"topic":"movr-users"} │ 4466151398997150 │ ["amsterdam", "ae147ae1-47ae-4800-8000-000000000022"] │ │ │ -│ {"key":["paris","dc28f5c2-8f5c-4800-8000-00000000002b"],"value":{"after":{"address":"2058 Rodriguez Stream","city":"paris","credit_card":"9584502537","id":"dc28f5c2-8f5c-4800-8000-00000000002b","name":"Tony Ortiz"}},"topic":"movr-users"} │ 4466146372222914 │ ["paris", "dc28f5c2-8f5c-4800-8000-00000000002b"] │ │ │ 
-└──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────┴─────────────────────────────────────────────────────────┴────────────┴──────────────────┘ -~~~ - -{% include {{ page.version.version }}/cdc/note-changefeed-message-page.md %} - -## Cloud storage sink - -Use a cloud storage sink to deliver changefeed data to OLAP or big data systems without requiring transport via Kafka. - -Some considerations when using cloud storage sinks: - -- Cloud storage sinks only work with `JSON` and emit newline-delimited `JSON` files. -- Cloud storage sinks can be configured to store emitted changefeed messages in one or more subdirectories organized by date. See [file partitioning](#partition-format) and the [General file format](create-changefeed.html#general-file-format) examples. -- The supported cloud schemes are: `s3`, `gs`, `azure`, `http`, and `https`. -- Both `http://` and `https://` are cloud storage sinks, **not** webhook sinks. It is necessary to prefix the scheme with `webhook-` for [webhook sinks](#webhook-sink). - -Examples of supported cloud storage sink URIs: - -### Amazon S3 - -~~~ -'s3://{BUCKET NAME}/{PATH}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}' -~~~ - -### Azure Storage - -~~~ -'azure://{CONTAINER NAME}/{PATH}?AZURE_ACCOUNT_NAME={ACCOUNT NAME}&AZURE_ACCOUNT_KEY={URL-ENCODED KEY}' -~~~ - -### Google Cloud Storage - -~~~ -'gs://{BUCKET NAME}/{PATH}?AUTH=specified&CREDENTIALS={ENCODED KEY}' -~~~ - -### HTTP - -~~~ -'http://localhost:8080/{PATH}' -~~~ - -### Cloud storage parameters - -The following table lists the available parameters for cloud storage sink URIs: - -URI Parameter | Storage | Description --------------------+------------------------+--------------------------- -`AWS_ACCESS_KEY_ID` | AWS | The access key ID to your AWS account. -`AWS_SECRET_ACCESS_KEY` | AWS | The secret access key to your AWS account. -`AUTH` | AWS S3, GCS | The authentication parameter can define either `specified` (default) or `implicit` authentication. To use `specified` authentication, pass your account credentials with the URI. To use `implicit` authentication, configure these credentials via an environment variable. See [Use Cloud Storage for Bulk Operations](use-cloud-storage-for-bulk-operations.html) for examples of each of these. -`AZURE_ACCOUNT_NAME` | Azure | The name of your Azure account. -`AZURE_ACCOUNT_KEY` | Azure | The URL-encoded account key for your Azure account. -`AZURE_ENVIRONMENT` | Azure | {% include {{ page.version.version }}/misc/azure-env-param.md %} -`CREDENTIALS` | GCS | (Required with `AUTH=specified`) The base64-encoded credentials of your Google [Service Account](https://cloud.google.com/iam/docs/understanding-service-accounts) credentials. -`file_size` | All | The file will be flushed (i.e., written to the sink) when it exceeds the specified file size. This can be used with the [`WITH resolved` option](create-changefeed.html#options), which flushes on a specified cadence.

                              **Default:** `16MB` -`partition_format` | All | Specify how changefeed [file paths](create-changefeed.html#general-file-format) are partitioned in cloud storage sinks. Use `partition_format` with the following values:

                              • `daily` is the default behavior that organizes directories by dates (`2022-05-18/`, `2022-05-19/`, etc.).
                              • `hourly` will further organize directories by hour within each date directory (`2022-05-18/06`, `2022-05-18/07`, etc.).
                              • `flat` will not partition the files at all.

                              For example: `CREATE CHANGEFEED FOR TABLE users INTO 'gs://...?AUTH...&partition_format=hourly'`

                              **Default:** `daily` -`S3_storage_class` | AWS S3 | Specify the S3 storage class for files created by the changefeed. See [Create a changefeed with an S3 storage class](create-changefeed.html#create-a-changefeed-with-an-s3-storage-class) for the available classes and an example.

                              **Default:** `STANDARD` -`topic_prefix` | All | Adds a prefix to all topic names.

                              For example, `CREATE CHANGEFEED FOR TABLE foo INTO 's3://...?topic_prefix=bar_'` would emit rows under the topic `bar_foo` instead of `foo`. - -{% include {{ page.version.version }}/cdc/options-table-note.md %} - -[Use Cloud Storage for Bulk Operations](use-cloud-storage-for-bulk-operations.html#authentication) provides more detail on authentication to cloud storage sinks. - -### Cloud storage sink messages - -The following shows the default JSON messages for a changefeed emitting to a cloud storage sink: - -~~~ -{ - "after":{ - "address":"51438 Janet Valleys", - "city":"boston", - "credit_card":"0904722368", - "id":"33333333-3333-4400-8000-00000000000a", - "name":"Daniel Hernandez MD" - }, - "key":[ - "boston", - "33333333-3333-4400-8000-00000000000a" - ] - } - { - "after":{ - "address":"15074 Richard Falls", - "city":"boston", - "credit_card":"0866384459", - "id":"370117cf-d77d-4778-b0b9-01ac17c15a06", - "name":"Cheyenne Morales" - }, - "key":[ - "boston", - "370117cf-d77d-4778-b0b9-01ac17c15a06" - ] - } - { - "after":{ - "address":"69687 Jessica Islands Apt. 68", - "city":"boston", - "credit_card":"6837062320", - "id":"3851eb85-1eb8-4200-8000-00000000000b", - "name":"Sarah Wang DDS" - }, - "key":[ - "boston", - "3851eb85-1eb8-4200-8000-00000000000b" - ] - } -. . . -~~~ - -{% include {{ page.version.version }}/cdc/note-changefeed-message-page.md %} - -## Webhook sink - -{{site.data.alerts.callout_info}} -{% include feature-phases/preview.md %} -{{site.data.alerts.end}} - -Use a webhook sink to deliver changefeed messages to an arbitrary HTTP endpoint. - -Example of a webhook sink URL: - -~~~ -'webhook-https://{your-webhook-endpoint}?insecure_tls_skip_verify=true' -~~~ - -The following table lists the parameters you can use in your webhook URI: - -URI Parameter | Description --------------------+------------------------------------------------------------------ -`ca_cert` | The base64-encoded `ca_cert` file. Specify `ca_cert` for a webhook sink.

                              Note: To encode your `ca.cert`, run `base64 -w 0 ca.cert`. -`client_cert` | The base64-encoded Privacy Enhanced Mail (PEM) certificate. This is used with `client_key`. -`client_key` | The base64-encoded private key for the PEM certificate. This is used with `client_cert`.

                              {% include {{ page.version.version }}/cdc/client-key-encryption.md %} -`insecure_tls_skip_verify` | If `true`, disable client-side validation of responses. Note that a CA certificate is still required; this parameter means that the client will not verify the certificate. **Warning:** Use this query parameter with caution, as it creates [MITM](https://en.wikipedia.org/wiki/Man-in-the-middle_attack) vulnerabilities unless combined with another method of authentication.

                              **Default:** `false` - -{% include {{ page.version.version }}/cdc/options-table-note.md %} - -The following are considerations when using the webhook sink: - -* Only supports HTTPS. Use the [`insecure_tls_skip_verify`](create-changefeed.html#tls-skip-verify) parameter when testing to disable certificate verification; however, this still requires HTTPS and certificates. -* Only supports JSON output format. -* There is no concurrency configurability. - -### Webhook sink configuration - - The `webhook_sink_config` option allows the changefeed flushing and retry behavior of your webhook sink to be configured. - -The following details the configurable fields: - -Field | Type | Description | Default --------------------+---------------------+------------------+------------------- -`Flush.Messages` | [`INT`](int.html) | When the batch reaches this configured size, it should be flushed (batch sent). | `0` -`Flush.Bytes` | [`INT`](int.html) | When the total byte size of all the messages in the batch reaches this amount, it should be flushed. | `0` -`Flush.Frequency` | [`INTERVAL`](interval.html) | When this amount of time has passed since the **first** received message in the batch without it flushing, it should be flushed. | `"0s"` -`Retry.Max` | [`INT`](int.html) or [`STRING`](string.html) | The maximum amount of time the sink will retry a single HTTP request to send a batch. This value must be positive (> 0). If infinite retries are desired, use `inf`. | `"0s"` -`Retry.Backoff` | [`INTERVAL`](interval.html) | The initial backoff the sink will wait after the first failure. The backoff will double (exponential backoff strategy), until the max is hit. | `"500ms"` - -{{site.data.alerts.callout_danger}} -Setting either `Messages` or `Bytes` with a non-zero value without setting `Frequency`, will cause the sink to assume `Frequency` has an infinity value. If either `Messages` or `Bytes` have a non-zero value, then a non-zero value for `Frequency` **must** be provided. This configuration is invalid and will cause an error, since the messages could sit in a batch indefinitely if the other conditions do not trigger. -{{site.data.alerts.end}} - -Some complexities to consider when setting `Flush` fields for batching: - -- When all batching parameters are zero (`"Messages"`, `"Bytes"`, and `"Frequency"`) the sink will interpret this configuration as "send batch every time." This would be the same as not providing any configuration at all: - -~~~ -{ - "Flush": { - "Messages": 0, - "Bytes": 0, - "Frequency": "0s" - } -} -~~~ - -- If one or more fields are set as non-zero values, any fields with a zero value the sink will interpret as infinity. For example, in the following configuration, the sink will send a batch whenever the size reaches 100 messages, **or**, when 5 seconds has passed since the batch was populated with its first message. `Bytes` defaults to `0` in this case, so a batch will never trigger due to a configured byte size: - -~~~ -{ - "Flush": { - "Messages": 100, - "Frequency": "5s" - } -} -~~~ - -### Webhook sink messages - -The following shows the default JSON messages for a changefeed emitting to a webhook sink. 
These changefeed messages were emitted as part of the [Create a changefeed connected to a Webhook sink](changefeed-examples.html#create-a-changefeed-connected-to-a-webhook-sink) example: - -~~~ -"2021/08/24 14":"00":21 -{ - "payload":[ - { - "after":{ - "city":"rome", - "creation_time":"2019-01-02T03:04:05", - "current_location":"39141 Travis Curve Suite 87", - "ext":{ - "brand":"Schwinn", - "color":"red" - }, - "id":"d7b18299-c0c4-4304-9ef7-05ae46fd5ee1", - "dog_owner_id":"5d0c85b5-8866-47cf-a6bc-d032f198e48f", - "status":"in_use", - "type":"bike" - }, - "key":[ - "rome", - "d7b18299-c0c4-4304-9ef7-05ae46fd5ee1" - ], - "topic":"vehicles", - "updated":"1629813621680097993.0000000000" - } - ], - "length":1 - } - - "2021/08/24 14":"00":22 - { - "payload":[ - { - "after":{ - "city":"san francisco", - "creation_time":"2019-01-02T03:04:05", - "current_location":"84888 Wallace Wall", - "ext":{ - "color":"black" - }, - "id":"020cf7f4-6324-48a0-9f74-6c9010fb1ab4", - "dog_owner_id":"b74ea421-fcaf-4d80-9dcc-d222d49bdc17", - "status":"available", - "type":"scooter" - }, - "key":[ - "san francisco", - "020cf7f4-6324-48a0-9f74-6c9010fb1ab4" - ], - "topic":"vehicles", - "updated":"1629813621680097993.0000000000" - } - ], - "length":1 - } -~~~ - -{% include {{ page.version.version }}/cdc/note-changefeed-message-page.md %} - -## See also - -- [Use Cloud Storage for Bulk Operations](use-cloud-storage-for-bulk-operations.html) -- [`CREATE CHANGEFEED`](create-changefeed.html) diff --git a/src/current/v22.1/changefeeds-in-multi-region-deployments.md b/src/current/v22.1/changefeeds-in-multi-region-deployments.md deleted file mode 100644 index 9d293eb78c5..00000000000 --- a/src/current/v22.1/changefeeds-in-multi-region-deployments.md +++ /dev/null @@ -1,40 +0,0 @@ ---- -title: Changefeeds in Multi-Region Deployments -summary: Understand limitations and usage of changefeeds in multi-region delpoyments. -toc: true -docs_area: stream_data ---- - - Changefeeds are supported on [regional by row tables](multiregion-overview.html#regional-by-row-tables). When working with changefeeds on regional by row tables, it is necessary to consider the following: - -- Setting a table's locality to [`REGIONAL BY ROW`](set-locality.html#regional-by-row) is equivalent to a [schema change](online-schema-changes.html) as the [`crdb_region` column](set-locality.html#crdb_region) becomes a hidden column for each of the rows in the table and is part of the [primary key](primary-key.html). Therefore, when existing tables targeted by changefeeds are made regional by row, it will trigger a backfill of the table through the changefeed. (See [Schema changes with a column backfill](changefeed-messages.html#schema-changes-with-column-backfill) for more details on the effects of schema changes on changefeeds.) - -{{site.data.alerts.callout_info}} -If the [`schema_change_policy`](create-changefeed.html#options) changefeed option is configured to `stop`, the backfill will cause the changefeed to fail. -{{site.data.alerts.end}} - -- Setting a table to `REGIONAL BY ROW` will have an impact on the changefeed's output as a result of the schema change. The backfill and future updated or inserted rows will emit output that includes the newly added `crdb_region` column as part of the schema. Therefore, it is necessary to ensure that programs consuming the changefeed can manage the new format of the primary keys. 
- -- [Changing a row's region](set-locality.html#update-a-rows-home-region) will appear as an insert and delete in the emitted changefeed output. For example, in the following output in which the region has been updated to `us-east1`, the insert messages are emitted followed by the [delete messages](changefeed-messages.html#delete-messages): - -~~~ -. . . -{"after": {"city": "washington dc", "crdb_region": "us-east1", "creation_time": "2019-01-02T03:04:05", "current_location": "52372 Katherine Plains", "ext": {"color": "black"}, "id": "54a69217-35ee-4000-8000-0000000001f0", "owner_id": "3dcc63f1-4120-4c00-8000-0000000004b7", "status": "in_use", "type": "scooter"}, "updated": "1632241564629087669.0000000000"} -{"after": {"city": "washington dc", "crdb_region": "us-east1", "creation_time": "2019-01-02T03:04:05", "current_location": "75024 Patrick Bridge", "ext": {"color": "black"}, "id": "54d242e6-bdc8-4400-8000-0000000001f1", "owner_id": "3ab9f559-b3d0-4c00-8000-00000000047b", "status": "in_use", "type": "scooter"}, "updated": "1632241564629087669.0000000000"} -{"after": {"city": "washington dc", "crdb_region": "us-east1", "creation_time": "2019-01-02T03:04:05", "current_location": "45597 Jackson Inlet", "ext": {"brand": "Schwinn", "color": "red"}, "id": "54fdf3b6-45a1-4c00-8000-0000000001f2", "owner_id": "4339c0eb-edfa-4400-8000-000000000521", "status": "in_use", "type": "bike"}, "updated": "1632241564629087669.0000000000"} -{"after": {"city": "washington dc", "crdb_region": "us-east1", "creation_time": "2019-01-02T03:04:05", "current_location": "18336 Katherine Port", "ext": {"color": "yellow"}, "id": "5529a485-cd7b-4000-8000-0000000001f3", "owner_id": "452bd3c3-6113-4000-8000-000000000547", "status": "in_use", "type": "scooter"}, "updated": "1632241564629087669.0000000000"} -{"after": null, "updated": "1632241564629087669.0000000000"} -{"after": null, "updated": "1632241564629087669.0000000000"} -{"after": null, "updated": "1632241564629087669.0000000000"} -{"after": null, "updated": "1632241564629087669.0000000000"} -. . . -~~~ - -See the changefeed [responses](changefeed-messages.html#responses) section for more general information on the messages emitted from a changefeed. - -## See also - -- [Changefeed Messages](changefeed-messages.html) -- [`SET LOCALITY`](set-locality.html) -- [Multi-Region Overview](multiregion-overview.html) -- [Primary Key Constraint](primary-key.html) diff --git a/src/current/v22.1/changefeeds-on-tables-with-column-families.md b/src/current/v22.1/changefeeds-on-tables-with-column-families.md deleted file mode 100644 index afe159e3729..00000000000 --- a/src/current/v22.1/changefeeds-on-tables-with-column-families.md +++ /dev/null @@ -1,490 +0,0 @@ ---- -title: Changefeeds on Tables with Column Families -summary: Understand how changefeeds work on tables with column families. -toc: true -docs_area: stream_data ---- - -{% include_cached new-in.html version="v22.1" %} You can create changefeeds on tables with more than one [column family](column-families.html). Changefeeds will emit individual messages per column family on a table. 
- -For further detail, see the following sections: - -- [Syntax](#syntax) -- [Message format](#message-format) -- [Examples](#create-a-changefeed-on-a-table-with-column-families) - -## Syntax - -To target a table with multiple column families, set the [`split_column_families` option](create-changefeed.html#split-column-families) when creating a changefeed: - -~~~ sql -CREATE CHANGEFEED FOR TABLE {table} INTO {sink} WITH split_column_families; -~~~ - -To emit messages for a specific column family, use the `FAMILY` keyword: - -~~~ sql -CREATE CHANGEFEED FOR TABLE {table} FAMILY {family} INTO {sink}; -~~~ - -{{site.data.alerts.callout_info}} -You can also use [Core changefeeds](changefeeds-on-tables-with-column-families.html?filters=core#create-a-core-changefeed-on-a-table-with-column-families) on tables with column families by using the [`EXPERIMENTAL CHANGEFEED FOR`](changefeed-for.html) statement with `split_column_families` or the `FAMILY` keyword. -{{site.data.alerts.end}} - -If a table has multiple column families, the `FAMILY` keyword will ensure the changefeed emits messages for **each** column family you define with `FAMILY` in the `CREATE CHANGEFEED` statement. If you do not specify `FAMILY`, then the changefeed will emit messages for **all** the table's column families. - -To specify multiple families on the same table, it is necessary to define the table and family in both instances: - -~~~ sql -CREATE CHANGEFEED FOR TABLE tbl FAMILY f_1, TABLE tbl FAMILY f_2; -~~~ - -## Message format - -The response will follow a typical [changefeed message format](changefeed-messages.html#responses), but with the family name appended to the table name with a `.`, in the format `table.family`: - -~~~ -{"after":{"column":"value"},"key":[1],"topic":"table.family"} -~~~ - -For [cloud storage sinks](changefeed-sinks.html#cloud-storage-sink), the filename will include the family name appended to the table name with a `+`, in the format `table+primary`. - -[Avro](changefeed-messages.html#avro) schema names will include the family name concatenated to the table name. - -The primary key columns will appear in the `key` for **all** column families, and will also appear in the value **only** for the families that they are a member of. - -For example, if the table `office_dogs` has a column family `primary`, containing the primary key and a `STRING` column, and a `secondary` column family containing a different `STRING` column, then you'll receive two messages for an insert. - -~~~ sql -CREATE TABLE office_dogs ( - id INT PRIMARY KEY, - name STRING, - owner STRING, - FAMILY primary (id, name), - FAMILY secondary (owner) - ); -~~~ - -The changefeed targeting this table (started with `split_column_families`) will emit the following when there are inserts to the table: - -~~~ -{"after":{"id":4,"name":"Toby"},"key":[4],"topic":"office_dogs.primary"}],"length":1} -{"after":{"owner":"Ashley"},"key":[4],"topic":"office_dogs.secondary"}],"length":1} -~~~ - -The output shows the `primary` column family with `4` in the value (`{"id":4,"name":"Toby"}`) and the key (`"key":[4]`). The `secondary` family doesn't contain the `id` column, so the primary key `4` is only in the key and **not** the value. For an update that only affects data in one column family, the changefeed will send one message for that update relating to the family. - -## Considerations - -- If you create a table **without** column families and then start a changefeed with the `split_column_families` option, it is not possible to add column families. 
 A subsequent `ALTER TABLE` statement adding a column family to the table will cause the changefeed to fail. -- When you do not specify column family names in the `CREATE` or `ALTER TABLE` statement, the family names will default to either of the following: - - `primary`: Since `primary` is a keyword, you'll receive a syntax error if you run `CREATE CHANGEFEED FOR table FAMILY primary`. To avoid this syntax error, use double quotes: `CREATE CHANGEFEED FOR table FAMILY "primary"`. You'll receive output from the changefeed like: `table.primary`. - - `fam_<family ID>_<column names>`: For a table that does not include a name for the family, e.g., `FAMILY (id, name)`, you'll receive output from the changefeed containing: `table.fam_0_id_name`. This references the table, the family ID, and the two columns that this column family includes (see the sketch below). - -For examples of starting changefeeds on tables with column families, see the following examples for Enterprise and Core changefeeds. - -
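The default-naming rule for unnamed column families can be sketched briefly. This is a minimal, hypothetical example; the `dogs_sketch` table and the webhook sink URI are placeholders and are not created elsewhere on this page:

~~~ sql
-- Hypothetical table with a single unnamed column family:
CREATE TABLE dogs_sketch (
    id INT PRIMARY KEY,
    name STRING,
    FAMILY (id, name)
);

-- The generated family name combines the family ID and its column names,
-- so the changefeed targets it as fam_0_id_name:
CREATE CHANGEFEED FOR TABLE dogs_sketch FAMILY fam_0_id_name
  INTO 'webhook-https://localhost:3000?insecure_tls_skip_verify=true';
~~~

Messages for this family would arrive under the topic `dogs_sketch.fam_0_id_name`, following the `table.family` format described above.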
                              - - -
                              - -
                              - -## Create a changefeed on a table with column families - -{{site.data.alerts.callout_info}} -[`CREATE CHANGEFEED`](create-changefeed.html) is an [Enterprise-only](enterprise-licensing.html) feature. For the Core version, see [the `CHANGEFEED FOR` example](changefeeds-on-tables-with-column-families.html?filters=core#create-a-core-changefeed-on-a-table-with-column-families). -{{site.data.alerts.end}} - -{% include_cached new-in.html version="v22.1" %} In this example, you'll set up changefeeds on two tables that have [column families](column-families.html). You'll use a single-node cluster sending changes to a webhook sink for this example, but you can use any [changefeed sink](changefeed-sinks.html) to work with tables that include column families. - -1. If you do not already have one, [request a trial {{ site.data.products.enterprise }} license](enterprise-licensing.html). - -1. Use the [`cockroach start-single-node`](cockroach-start-single-node.html) command to start a single-node cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - cockroach start-single-node --insecure --listen-addr=localhost --background - ~~~ - -1. As the `root` user, open the [built-in SQL client](cockroach-sql.html): - - {% include_cached copy-clipboard.html %} - ~~~ shell - cockroach sql --insecure - ~~~ - -1. Set your organization and [Enterprise license](enterprise-licensing.html) key that you received via email: - - {% include_cached copy-clipboard.html %} - ~~~ sql - SET CLUSTER SETTING cluster.organization = ''; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - SET CLUSTER SETTING enterprise.license = ''; - ~~~ - -1. Enable the `kv.rangefeed.enabled` [cluster setting](cluster-settings.html): - - {% include_cached copy-clipboard.html %} - ~~~ sql - SET CLUSTER SETTING kv.rangefeed.enabled = true; - ~~~ - -1. In a separate terminal window, set up your HTTP server. Clone the test repository: - - {% include_cached copy-clipboard.html %} - ~~~shell - git clone https://github.com/cockroachlabs/cdc-webhook-sink-test-server.git - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~shell - cd cdc-webhook-sink-test-server/go-https-server - ~~~ - -1. Next make the script executable and then run the server (passing a specific port if preferred, otherwise it will default to `:3000`): - - {% include_cached copy-clipboard.html %} - ~~~shell - chmod +x ./server.sh - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~shell - ./server.sh - ~~~ - -1. Back in your SQL shell, create a database called `cdc_demo`: - - {% include_cached copy-clipboard.html %} - ~~~ sql - CREATE DATABASE cdc_demo; - ~~~ - -1. Set the database as the default: - - {% include_cached copy-clipboard.html %} - ~~~ sql - USE cdc_demo; - ~~~ - -1. Create a table with two column families: - - {% include_cached copy-clipboard.html %} - ~~~ sql - CREATE TABLE office_dogs ( - id INT PRIMARY KEY, - name STRING, - dog_owner STRING, - FAMILY dogs (id, name), - FAMILY employee (dog_owner) - ); - ~~~ - -1. Insert some data into the table: - - {% include_cached copy-clipboard.html %} - ~~~ sql - INSERT INTO office_dogs (id, name, dog_owner) VALUES (1, 'Petee', 'Lauren'), (2, 'Max', 'Taylor'), (3, 'Patch', 'Sammy'), (4, 'Roach', 'Ashley'); - ~~~ - -1. 
Create a second table that also defines column families: - - {% include_cached copy-clipboard.html %} - ~~~ sql - CREATE TABLE office_plants ( - id INT PRIMARY KEY, - plant_name STRING, - office_floor INT, - safe_for_dogs BOOL, - FAMILY dog_friendly (office_floor, safe_for_dogs), - FAMILY plant (id, plant_name) - ); - ~~~ - -1. Insert some data into `office_plants`: - - {% include_cached copy-clipboard.html %} - ~~~ sql - INSERT INTO office_plants (id, plant_name, office_floor, safe_for_dogs) VALUES (1, 'Sansevieria', 11, false), (2, 'Monstera', 11, false), (3, 'Peperomia', 10, true), (4, 'Jade', 9, true); - ~~~ - -1. Create a changefeed on the `office_dogs` table targeting one of the column families. Use the `FAMILY` keyword in the `CREATE` statement: - - {% include_cached copy-clipboard.html %} - ~~~ sql - CREATE CHANGEFEED FOR TABLE office_dogs FAMILY employee INTO 'webhook-https://localhost:3000?insecure_tls_skip_verify=true'; - ~~~ - - You'll receive one message for each of the inserts that affects the specified column family: - - ~~~ - {"payload":[{"after":{"dog_owner":"Lauren"},"key":[1],"topic":"office_dogs.employee"}],"length":1} - {"payload":[{"after":{"dog_owner":"Sammy"},"key":[3],"topic":"office_dogs.employee"}],"length":1} - {"payload":[{"after":{"dog_owner":"Taylor"},"key":[2],"topic":"office_dogs.employee"}],"length":1} - {"payload":[{"after":{"dog_owner":"Ashley"},"key":[4],"topic":"office_dogs.employee"}],"length":1} - ~~~ - - {{site.data.alerts.callout_info}} - The ordering of messages is not guaranteed. That is, you may not always receive messages for the same row, or even the same change to the same row, next to each other. - {{site.data.alerts.end}} - - Alternatively, create a changefeed using the `FAMILY` keyword across two tables: - - {% include_cached copy-clipboard.html %} - ~~~ sql - CREATE CHANGEFEED FOR TABLE office_dogs FAMILY employee, TABLE office_plants FAMILY dog_friendly INTO 'webhook-https://localhost:3000?insecure_tls_skip_verify=true'; - ~~~ - - You'll receive one message for each insert that affects the specified column families: - - ~~~ - {"payload":[{"after":{"dog_owner":"Lauren"},"key":[1],"topic":"office_dogs.employee"}],"length":1} - {"payload":[{"after":{"office_floor":11,"safe_for_dogs":false},"key":[1],"topic":"office_plants.dog_friendly"}],"length":1} - {"payload":[{"after":{"office_floor":9,"safe_for_dogs":true},"key":[4],"topic":"office_plants.dog_friendly"}],"length":1} - {"payload":[{"after":{"dog_owner":"Taylor"},"key":[2],"topic":"office_dogs.employee"}],"length":1} - {"payload":[{"after":{"office_floor":11,"safe_for_dogs":false},"key":[2],"topic":"office_plants.dog_friendly"}],"length":1} - {"payload":[{"after":{"office_floor":10,"safe_for_dogs":true},"key":[3],"topic":"office_plants.dog_friendly"}],"length":1} - {"payload":[{"after":{"dog_owner":"Ashley"},"key":[4],"topic":"office_dogs.employee"}],"length":1} - {"payload":[{"after":{"dog_owner":"Sammy"},"key":[3],"topic":"office_dogs.employee"}],"length":1} - ~~~ - - This allows you to define particular column families for the changefeed to target, without necessarily specifying every family in a table. - - {{site.data.alerts.callout_info}} - To create a changefeed specifying two families on **one** table, ensure that you define the table and family in both instances: - - `CREATE CHANGEFEED FOR TABLE office_dogs FAMILY employee, TABLE office_dogs FAMILY dogs INTO {sink};` - {{site.data.alerts.end}} - -1. 
To create a changefeed that emits messages for all column families in a table, use the [`split_column_families`](create-changefeed.html#split-column-families) option: - - {% include_cached copy-clipboard.html %} - ~~~ sql - CREATE CHANGEFEED FOR TABLE office_dogs INTO 'webhook-https://localhost:3000?insecure_tls_skip_verify=true' with split_column_families; - ~~~ - - You'll receive output for both of the column families in the `office_dogs` table: - - ~~~ - {"payload":[{"after":{"id":1,"name":"Petee"},"key":[1],"topic":"office_dogs.dogs"}],"length":1} - {"payload":[{"after":{"dog_owner":"Lauren"},"key":[1],"topic":"office_dogs.employee"}],"length":1} - {"payload":[{"after":{"id":2,"name":"Max"},"key":[2],"topic":"office_dogs.dogs"}],"length":1} - {"payload":[{"after":{"dog_owner":"Taylor"},"key":[2],"topic":"office_dogs.employee"}],"length":1} - {"payload":[{"after":{"id":3,"name":"Patch"},"key":[3],"topic":"office_dogs.dogs"}],"length":1} - {"payload":[{"after":{"dog_owner":"Sammy"},"key":[3],"topic":"office_dogs.employee"}],"length":1} - {"payload":[{"after":{"id":4,"name":"Roach"},"key":[4],"topic":"office_dogs.dogs"}],"length":1} - {"payload":[{"after":{"dog_owner":"Ashley"},"key":[4],"topic":"office_dogs.employee"}],"length":1} - ~~~ - - {{site.data.alerts.callout_info}} - You can find details of your changefeed job using [`SHOW CHANGEFEED JOBS`](show-jobs.html#show-changefeed-jobs). Changefeeds streaming to [Kafka](changefeed-sinks.html#kafka) or [Google Cloud Pub/Sub](changefeed-sinks.html#google-cloud-pub-sub) will populate the `topics` field in the `SHOW CHANGEFEED JOBS` output. - - When using the `FAMILY` keyword, the `topics` field will display in the format `topic.family`, e.g., `office_dogs.employee,office_dogs.dogs`. With the `split_column_families` option set, `topics` will show the topic name and a family placeholder `topic.{family}`, e.g., `office_dogs.{family}`. - {{site.data.alerts.end}} - -1. Update one of the values in the table: - - {% include_cached copy-clipboard.html %} - ~~~ sql - UPDATE office_dogs SET name = 'Izzy' WHERE id = 4; - ~~~ - - This only affects one column family, which means you'll receive one message: - - ~~~ - {"payload":[{"after":{"id":4,"name":"Izzy"},"key":[4],"topic":"office_dogs.dogs"}],"length":1} - ~~~ - -
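To see how the `topics` field is reported for your changefeed jobs, you can query [`SHOW CHANGEFEED JOBS`](show-jobs.html#show-changefeed-jobs). This is a minimal sketch that selects only the columns mentioned in the preceding note:

~~~ sql
-- List changefeed jobs with the topics they emit to.
SELECT job_id, topics FROM [SHOW CHANGEFEED JOBS];
~~~

For changefeeds created with the `FAMILY` keyword and streaming to Kafka or Google Cloud Pub/Sub, expect `topics` values like `office_dogs.employee`; with `split_column_families`, expect the placeholder form `office_dogs.{family}`.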
                              - -
                              - -## Create a Core changefeed on a table with column families - -{% include_cached new-in.html version="v22.1" %} In this example, you'll set up Core changefeeds on two tables that have [column families](column-families.html). You'll use a single-node cluster with the Core changefeed sending changes to the client. - -1. Use the [`cockroach start-single-node`](cockroach-start-single-node.html) command to start a single-node cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - cockroach start-single-node --insecure --listen-addr=localhost --background - ~~~ - -1. As the `root` user, open the [built-in SQL client](cockroach-sql.html): - - {% include_cached copy-clipboard.html %} - ~~~ shell - cockroach sql --url="postgresql://root@127.0.0.1:26257?sslmode=disable" --format=csv - ~~~ - -1. Enable the `kv.rangefeed.enabled` [cluster setting](cluster-settings.html): - - {% include_cached copy-clipboard.html %} - ~~~ sql - SET CLUSTER SETTING kv.rangefeed.enabled = true; - ~~~ - -1. Create a database called `cdc_demo`: - - {% include_cached copy-clipboard.html %} - ~~~ sql - CREATE DATABASE cdc_demo; - ~~~ - -1. Set the database as the default: - - {% include_cached copy-clipboard.html %} - ~~~ sql - USE cdc_demo; - ~~~ - -1. Create a table with two column families: - - {% include_cached copy-clipboard.html %} - ~~~ sql - CREATE TABLE office_dogs ( - id INT PRIMARY KEY, - name STRING, - dog_owner STRING, - FAMILY dogs (id, name), - FAMILY employee (dog_owner) - ); - ~~~ - -1. Insert some data into the table: - - {% include_cached copy-clipboard.html %} - ~~~ sql - INSERT INTO office_dogs (id, name, dog_owner) VALUES (1, 'Petee', 'Lauren'), (2, 'Max', 'Taylor'), (3, 'Patch', 'Sammy'), (4, 'Roach', 'Ashley'); - ~~~ - -1. Create another table that also defines two column families: - - {% include_cached copy-clipboard.html %} - ~~~ sql - CREATE TABLE office_plants ( - id INT PRIMARY KEY, - plant_name STRING, - office_floor INT, - safe_for_dogs BOOL, - FAMILY dog_friendly (office_floor, safe_for_dogs), - FAMILY plant (id, plant_name) - ); - ~~~ - -1. Insert some data into `office_plants`: - - {% include_cached copy-clipboard.html %} - ~~~ sql - INSERT INTO office_plants (id, plant_name, office_floor, safe_for_dogs) VALUES (1, 'Sansevieria', 11, false), (2, 'Monstera', 11, false), (3, 'Peperomia', 10, true), (4, 'Jade', 9, true); - ~~~ - -1. Create a changefeed on the `office_dogs` table targeting one of the column families. Use the `FAMILY` keyword in the statement: - - {% include_cached copy-clipboard.html %} - ~~~ sql - EXPERIMENTAL CHANGEFEED FOR TABLE office_dogs FAMILY employee; - ~~~ - - You'll receive one message for each of the inserts that affects the specified column family: - - ~~~ - table,key,value - office_dogs.employee,[1],"{""after"": {""owner"": ""Lauren""}}" - office_dogs.employee,[2],"{""after"": {""owner"": ""Taylor""}}" - office_dogs.employee,[3],"{""after"": {""owner"": ""Sammy""}}" - office_dogs.employee,[4],"{""after"": {""owner"": ""Ashley""}}" - ~~~ - - {{site.data.alerts.callout_info}} - The ordering of messages is not guaranteed. That is, you may not always receive messages for the same row, or even the same change to the same row, next to each other. 
- {{site.data.alerts.end}} - - Alternatively, create a changefeed using the `FAMILY` keyword across two tables: - - {% include_cached copy-clipboard.html %} - ~~~ sql - EXPERIMENTAL CHANGEFEED FOR TABLE office_dogs FAMILY employee, TABLE office_plants FAMILY dog_friendly; - ~~~ - - You'll receive one message for each insert that affects the specified column families: - - ~~~ - table,key,value - office_plants.dog_friendly,[1],"{""after"": {""office_floor"": 11, ""safe_for_dogs"": false}}" - office_plants.dog_friendly,[2],"{""after"": {""office_floor"": 11, ""safe_for_dogs"": false}}" - office_plants.dog_friendly,[3],"{""after"": {""office_floor"": 10, ""safe_for_dogs"": true}}" - office_plants.dog_friendly,[4],"{""after"": {""office_floor"": 9, ""safe_for_dogs"": true}}" - office_dogs.employee,[1],"{""after"": {""dog_owner"": ""Lauren""}}" - office_dogs.employee,[2],"{""after"": {""dog_owner"": ""Taylor""}}" - office_dogs.employee,[3],"{""after"": {""dog_owner"": ""Sammy""}}" - office_dogs.employee,[4],"{""after"": {""dog_owner"": ""Ashley""}}" - ~~~ - - This allows you to define particular column families for the changefeed to target, without necessarily specifying every family in a table. - - {{site.data.alerts.callout_info}} - To create a changefeed specifying two families on **one** table, ensure that you define the table and family in both instances: - - `EXPERIMENTAL CHANGEFEED FOR TABLE office_dogs FAMILY employee, TABLE office_dogs FAMILY dogs;` - {{site.data.alerts.end}} - -1. To create a changefeed that emits messages for all column families in a table, use the [`split_column_families`](changefeed-for.html#split-column-families) option: - - {% include_cached copy-clipboard.html %} - ~~~ sql - EXPERIMENTAL CHANGEFEED FOR TABLE office_dogs WITH split_column_families; - ~~~ - - In your other terminal window, insert some more values: - - {% include_cached copy-clipboard.html %} - ~~~ shell - cockroach sql --insecure -e "INSERT INTO cdc_demo.office_dogs (id, name, dog_owner) VALUES (5, 'Daisy', 'Cameron'), (6, 'Sage', 'Blair'), (7, 'Bella', 'Ellis');" - ~~~ - - Your changefeed will output the following: - - ~~~ - table,key,value - office_dogs.dogs,[1],"{""after"": {""id"": 1, ""name"": ""Petee""}}" - office_dogs.employee,[1],"{""after"": {""owner"": ""Lauren""}}" - office_dogs.dogs,[2],"{""after"": {""id"": 2, ""name"": ""Max""}}" - office_dogs.employee,[2],"{""after"": {""owner"": ""Taylor""}}" - office_dogs.dogs,[3],"{""after"": {""id"": 3, ""name"": ""Patch""}}" - office_dogs.employee,[3],"{""after"": {""owner"": ""Sammy""}}" - office_dogs.dogs,[4],"{""after"": {""id"": 4, ""name"": ""Roach""}}" - office_dogs.employee,[4],"{""after"": {""owner"": ""Ashley""}}" - office_dogs.dogs,[5],"{""after"": {""id"": 5, ""name"": ""Daisy""}}" - office_dogs.employee,[5],"{""after"": {""owner"": ""Cameron""}}" - office_dogs.dogs,[6],"{""after"": {""id"": 6, ""name"": ""Sage""}}" - office_dogs.employee,[6],"{""after"": {""owner"": ""Blair""}}" - office_dogs.dogs,[7],"{""after"": {""id"": 7, ""name"": ""Bella""}}" - office_dogs.employee,[7],"{""after"": {""owner"": ""Ellis""}}" - ~~~ - -1. In your other terminal window, update one of the values in the table: - - {% include_cached copy-clipboard.html %} - ~~~ shell - cockroach sql --insecure -e "UPDATE cdc_demo.office_dogs SET name = 'Izzy' WHERE id = 4;" - ~~~ - - This only affects one column family, which means you'll receive one message: - - ~~~ - office_dogs.dogs,[4],"{""after"": {""id"": 4, ""name"": ""Izzy""}}" - ~~~ - -
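To see the multi-message case from this Core changefeed as well, you can run an update that touches columns in both column families. The values below are illustrative, and the output shown is the shape to expect based on the earlier examples rather than captured output:

~~~ shell
cockroach sql --insecure -e "UPDATE cdc_demo.office_dogs SET name = 'Pete', dog_owner = 'Morgan' WHERE id = 1;"
~~~

Because `name` belongs to the `dogs` family and `dog_owner` belongs to the `employee` family, the changefeed emits one message per family:

~~~
office_dogs.dogs,[1],"{""after"": {""id"": 1, ""name"": ""Pete""}}"
office_dogs.employee,[1],"{""after"": {""dog_owner"": ""Morgan""}}"
~~~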
                              - -## See also - -- [`EXPERIMENTAL CHANGEFEED`](changefeed-for.html) -- [`CREATE CHANGEFEED`](create-changefeed.html) -- [Changefeed Sinks](changefeed-sinks.html) -- [Changefeed Examples](changefeed-examples.html) \ No newline at end of file diff --git a/src/current/v22.1/check.md b/src/current/v22.1/check.md deleted file mode 100644 index 747f2587113..00000000000 --- a/src/current/v22.1/check.md +++ /dev/null @@ -1,121 +0,0 @@ ---- -title: CHECK Constraint -summary: The CHECK constraint specifies that values for the column in INSERT or UPDATE statements must satisfy a Boolean expression. -toc: true -docs_area: reference.sql ---- - -The `CHECK` [constraint](constraints.html) specifies that values for the column in [`INSERT`](insert.html) or [`UPDATE`](update.html) statements must return `TRUE` or `NULL` for a Boolean expression. If any values return `FALSE`, the entire statement is rejected. - -## Details - -- You can specify `CHECK` constraints at the column or table level and can reference other columns within the table. Internally, all column-level `CHECK` constraints are converted to table-level constraints so they can be handled consistently. - -- You can add `CHECK` constraints to columns that were created earlier in the same transaction. For an example, see [Add the `CHECK` constraint](add-constraint.html#add-constraints-to-columns-created-during-a-transaction). - -- You can have multiple `CHECK` constraints on a single column but for performance optimization you should combine them using logical operators. For example, you should specify: - - ~~~ sql - warranty_period INT CHECK (warranty_period >= 0) CHECK (warranty_period <= 24) - ~~~ - - as: - - ~~~ sql - warranty_period INT CHECK (warranty_period BETWEEN 0 AND 24) - ~~~ - -- When you drop a column with a `CHECK` constraint, the `CHECK` constraint is also dropped. - -## Syntax - -You can define `CHECK` constraints at the [column level](#column-level), where the constraint applies only to a single column, and at the [table level](#table-level). - -You can also add `CHECK` constraints to a table using [`ADD CONSTRAINT`](add-constraint.html#add-the-check-constraint). - -### Column level - -
                              -{% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/check_column_level.html %} -
                              - - Parameter | Description ------------|------------- -`table_name` | The name of the table you're creating. -`column_name` | The name of the constrained column. -`column_type` | The constrained column's [data type](data-types.html). -`check_expr` | An expression that returns a Boolean value; if the expression evaluates to `FALSE`, the value cannot be inserted. -`column_constraints` | Any other column-level [constraints](constraints.html) you want to apply to this column. -`column_def` | Definitions for any other columns in the table. -`table_constraints` | Any table-level [constraints](constraints.html) you want to apply. - -#### Example - -The following example specifies the column-level `CHECK` constraint that a `quantity_on_hand` value must be greater than `0`. - -~~~ sql -> CREATE TABLE inventories ( - product_id INT NOT NULL, - warehouse_id INT NOT NULL, - quantity_on_hand INT NOT NULL CHECK (quantity_on_hand > 0), - PRIMARY KEY (product_id, warehouse_id) - ); -~~~ - -### Table level - -
                              -{% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/check_table_level.html %} -
                              - - Parameter | Description ------------|------------- -`table_name` | The name of the table you're creating. -`column_def` | Definitions for any other columns in the table. -`constraint_name` | The name to use for the constraint, which must be unique to its table and follow these [identifier rules](keywords-and-identifiers.html#identifiers). -`check_expr` | An expression that returns a Boolean value. If the expression evaluates to `FALSE`, the value cannot be inserted. -`table_constraints` | Any other table-level [constraints](constraints.html) to apply. - -#### Example - -The following example specifies the table-level `CHECK` constraint named `ok_to_supply` that a `quantity_on_hand` value must be greater than `0` and a `warehouse_id` must be between `100` and `200`. - -~~~ sql -> CREATE TABLE inventories ( - product_id INT NOT NULL, - warehouse_id INT NOT NULL, - quantity_on_hand INT NOT NULL, - PRIMARY KEY (product_id, warehouse_id), - CONSTRAINT ok_to_supply CHECK (quantity_on_hand > 0 AND warehouse_id BETWEEN 100 AND 200) - ); -~~~ - -## Usage example - -The following example demonstrates that when you specify the `CHECK` constraint that a `quantity_on_hand` value must be greater than `0`, and you attempt to insert the value `0`, CockroachDB returns an error. - -~~~ sql -> CREATE TABLE inventories ( - product_id INT NOT NULL, - warehouse_id INT NOT NULL, - quantity_on_hand INT NOT NULL CHECK (quantity_on_hand > 0), - PRIMARY KEY (product_id, warehouse_id) - ); - -> INSERT INTO inventories (product_id, warehouse_id, quantity_on_hand) VALUES (1, 2, 0); -~~~ -~~~ -pq: failed to satisfy CHECK constraint (quantity_on_hand > 0) -~~~ - - -## See also - -- [Constraints](constraints.html) -- [`DROP CONSTRAINT`](drop-constraint.html) -- [`DEFAULT` constraint](default-value.html) -- [`REFERENCES` constraint (Foreign Key)](foreign-key.html) -- [`NOT NULL` constraint](not-null.html) -- [`PRIMARY KEY` constraint](primary-key.html) -- [`UNIQUE` constraint](unique.html) -- [`SHOW CONSTRAINTS`](show-constraints.html) diff --git a/src/current/v22.1/choose-a-deployment-option.md b/src/current/v22.1/choose-a-deployment-option.md deleted file mode 100644 index 64336605b98..00000000000 --- a/src/current/v22.1/choose-a-deployment-option.md +++ /dev/null @@ -1,85 +0,0 @@ ---- -title: How to Choose a Deployment Option -summary: Learn how to choose between CockroachDB Serverless, Dedicated, and Self-Hosted deployment option. -toc: true -docs_area: deploy ---- - -Cockroach Labs offers three ways to deploy CockroachDB: two managed services—CockroachDB {{ site.data.products.serverless }} and CockroachDB {{ site.data.products.dedicated }}—and a self managed option—CockroachDB Self-Hosted. To help you choose which deployment option will best satisfy your requirements, this page describes the application types each deployment is designed for and lists some of the deployment option features that support the application types. For a full feature comparison list, see [CockroachDB: A cloud native, globally-distributed SQL database](https://www.cockroachlabs.com/get-started-cockroachdb/). - -
- - - - - - - - - - - - - - - - - - - - -
<table>
  <thead>
    <tr>
      <th>Application type</th>
      <th>Deployment option</th>
      <th>Feature</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>
        <ul>
          <li>Lightweight applications, starter projects, development environments, and proofs of concept.</li>
          <li>Applications with unpredictable scale or regular peaks and troughs of activity.</li>
          <li>Applications that will only need to be deployed in a single region.</li>
          <li>Applications with explicit budget constraints.</li>
        </ul>
      </td>
      <td>
        <ul>
          <li>CockroachDB {{ site.data.products.serverless }}: A fully managed, multi-tenant CockroachDB deployment, in a single region and cloud (AWS or GCP). Delivers an instant, autoscaling database and offers a generous free tier and consumption based billing once free limits are exceeded.</li>
        </ul>
      </td>
      <td>
        <ul>
          <li>Scale: Automatic transactional capacity scaling (up and down) depending on database activity. Ability to scale down to zero and consume zero resources.</li>
          <li>Availability: High availability. Data replication in triplicate within a single region. Ensures outage survival by spreading replicas across availability zones.</li>
          <li>Operations: Cockroach Labs SRE team manages and maintains every cluster. Backups every three hours.</li>
          <li>Cost: Free for 10 GiB of storage and 50M Request Units. Consumption based billing and resource limits enforce budget requirements.</li>
          <li>Resource isolation: Shared CockroachDB software and infrastructure. Data is protected and not shared between deployments.</li>
          <li>Support: Provided by CockroachDB community forum and public Slack workspace.</li>
        </ul>
      </td>
    </tr>
    <tr>
      <td>
        <ul>
          <li>All workloads: lightweight and critical production.</li>
          <li>Applications that may need to grow and scale over time.</li>
          <li>Applications with current and future requirements to grow into new cloud regions to serve customers in new markets.</li>
          <li>Applications that require real-time integration with other systems.</li>
        </ul>
      </td>
      <td>
        <ul>
          <li>CockroachDB {{ site.data.products.dedicated }}: A fully managed, single-tenant CockroachDB deployment.</li>
        </ul>
      </td>
      <td>
        <ul>
          <li>Scale: Node-based; self-service add and remove nodes.</li>
          <li>Availability: Service availability guaranteed with 99.99% uptime. Configurable data replication within or across regions.</li>
          <li>Operations: Cockroach Labs SRE provides guaranteed uptime, optimization, security, and operations for cluster, node, and cloud instances. Backups daily and hourly.</li>
          <li>Cost: Pricing based on disk size and storage. A single, predictable price packages hardware costs with SRE resources and support.</li>
          <li>Resource isolation: Dedicated, single-tenant instance of CockroachDB software and infrastructure.</li>
          <li>Support: Enterprise grade support provided by Cockroach Labs.</li>
          <li>Advanced features: Yes. See Enterprise Features.</li>
        </ul>
      </td>
    </tr>
    <tr>
      <td>
        <ul>
          <li>All workloads: lightweight and critical production.</li>
          <li>Lightweight applications, starter projects, and proofs of concept.</li>
          <li>Teams that require complete control over the database environment and deploy in their own private data centers.</li>
          <li>Advanced security controls and requirements.</li>
          <li>Applications that need to run in multi-cloud and hybrid cloud deployments.</li>
          <li>Applications that need to run in a cloud not supported by Dedicated services.</li>
          <li>Applications that require real-time integration with other systems.</li>
        </ul>
      </td>
      <td>
        <ul>
          <li>CockroachDB Self-Hosted: A self-managed CockroachDB deployment.</li>
        </ul>
      </td>
      <td>
        <ul>
          <li>Scale: Node-based; self-service add and remove nodes.</li>
          <li>Availability: Completely configurable for each deployment. Manual controls for replication of data within or across regions.</li>
          <li>Operations: Self deployed and managed. Manual scaling.</li>
          <li>Cost: Per hardware and infrastructure type.</li>
          <li>Resource isolation: Dedicated, single-tenant instance of CockroachDB software.</li>
          <li>Support: Enterprise grade support provided by Cockroach Labs.</li>
          <li>Advanced features: Yes. See Enterprise Features.</li>
        </ul>
      </td>
    </tr>
  </tbody>
</table>
- -## See also - -- [CockroachDB deployment](architecture/glossary.html#cockroachdb-deployment-terms) -- [CockroachDB pricing](https://www.cockroachlabs.com/get-started-cockroachdb/) -- [Manual Deployment](manual-deployment.html) -- [Kubernetes Deployment](kubernetes-overview.html) diff --git a/src/current/v22.1/choosing-a-multi-region-configuration.md b/src/current/v22.1/choosing-a-multi-region-configuration.md deleted file mode 100644 index 2702f29def6..00000000000 --- a/src/current/v22.1/choosing-a-multi-region-configuration.md +++ /dev/null @@ -1,68 +0,0 @@ ---- -title: How to Choose a Multi-Region Configuration -summary: Learn how to configure CockroachDB multi-region features. -toc: true -docs_area: deploy ---- - -This page has high-level information about how to configure a [multi-region cluster's](multiregion-overview.html) [survival goals](multiregion-overview.html#survival-goals) and [table localities](multiregion-overview.html#table-localities). - -{% include enterprise-feature.md %} - -## Multi-region configuration options - -The options for configuring your multi-region cluster include: - -- _Change nothing_: Using the [default settings](multiregion-overview.html#default-settings), you get: - - Zone survival (the default). - - Low-latency reads and writes from a single region. - - A choice of low-latency stale reads or high-latency fresh reads from other regions (and high-latency fresh reads is the default). - -- _Change only [survival goals](multiregion-overview.html#survival-goals)_: This configuration is useful for single-region apps that need higher levels of survival. In this configuration, you move from availability zone (AZ) survival to get: - - Region survival. - - Low-latency reads from a single region. - - A choice of low-latency stale reads or high-latency fresh reads from other regions (and high-latency fresh reads is the default). - - Higher-latency writes from all regions (due to region survival). - -- _Change only [table localities](multiregion-overview.html#table-localities)_: This is useful for multi-region apps that require different read and write latency guarantees for different tables in the database, and are not concerned with surviving a region failure. In this configuration, you get: - - Zone survival (the default). - - For [global tables](multiregion-overview.html#global-tables), low-latency reads from all regions. - - For [regional by row tables](multiregion-overview.html#regional-by-row-tables), low-latency reads and writes from each row's [home region](set-locality.html#crdb_region), and low-latency [follower reads](follower-reads.html) from all other regions. - -- _Change both [survival goals](multiregion-overview.html#survival-goals) and [table localities](multiregion-overview.html#table-localities)_: This is useful for multi-region apps that want a high level of survival. In this configuration, you move from zone survival and get: - - Region survival. - - Low-latency reads from all regions. - - Higher-latency writes from all regions (due to region survival). - -## Configuration options vs. performance characteristics and application styles - -The following table offers another view of how the various configuration options map to: - -- The performance characteristics of specific survival goal and table locality combinations. -- The types of applications that can benefit from each combination. - -|
locality ↓ survival → | `ZONE` | `REGION` | -|---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| `REGIONAL BY TABLE` | Low-latency for single-region writes and multi-region stale reads. | Single-region writes are higher latency than for `ZONE`, as at least one additional region must be consulted for each write. Stale multi-region reads are of comparable latency to `ZONE` survival. | -| | Single-region apps that can accept region failure. | Single-region apps that must survive region failure. | -| `REGIONAL BY ROW` | Low-latency consistent multi-region reads and writes for rows that are homed in specific regions. | Low-latency consistent reads from a row's home region. Low-latency consistent [stale reads](follower-reads.html) from outside the row's home region. Higher-latency writes if writing to a row from outside its home region. | -| | Multi-region apps that read and write individual rows of the table from a specific region and can accept region failure. | Multi-region apps that read and write individual rows of the table from a specific region and must survive a region failure. | -| `GLOBAL` | Low-latency multi-region reads. Writes are higher latency than reads. | Low-latency multi-region reads. Writes are higher latency than reads. There should be minimal difference in write latencies between `ZONE` and `REGION` survival. | -| | Multi-region apps that need low-latency reads of a "read-mostly" table. | Multi-region apps that need low-latency reads of a "read-mostly" table and must survive a region failure. | - - -Different databases and tables within the same cluster can each use different combinations of these settings. - -{{site.data.alerts.callout_success}} -{% include {{page.version.version}}/misc/multiregion-max-offset.md %} -{{site.data.alerts.end}} - -## See also - -- [Multi-Region Capabilities Overview](multiregion-overview.html) -- [When to Use `REGIONAL` vs. `GLOBAL` Tables](when-to-use-regional-vs-global-tables.html) -- [When to Use `ZONE` vs. `REGION` Survival Goals](when-to-use-zone-vs-region-survival-goals.html) -- [Survive Region Outages with CockroachDB](https://www.cockroachlabs.com/blog/under-the-hood-multi-region/) -- [Topology Patterns](topology-patterns.html) -- [Disaster Recovery](disaster-recovery.html) -- [Low Latency Reads and Writes in a Multi-Region Cluster](demo-low-latency-multi-region-deployment.html) diff --git a/src/current/v22.1/cluster-api.md b/src/current/v22.1/cluster-api.md deleted file mode 100644 index 9c58836b0a7..00000000000 --- a/src/current/v22.1/cluster-api.md +++ /dev/null @@ -1,82 +0,0 @@ ---- -title: Cluster API v2.0 -summary: Programmatically access and monitor cluster and node status information with a RESTful API. -toc: true -docs_area: manage ---- - -The CockroachDB Cluster API is a REST API that provides information about a cluster and its nodes. The API offers programmatic access to much of the information available in the [DB Console](ui-overview.html) user interface, enabling you to monitor and troubleshoot your cluster using your choice of tooling. - -The Cluster API is hosted by all nodes of your cluster and provides information about all nodes. 
The API is available on the same port that is listening for HTTP connections to the DB Console. This defaults to `8080` and can be specified using `--http-addr={server}:{port}` when configuring your node. - -## Resources - -The following endpoints are available as URLs under the `/api/v2` base path (for example, `https://localhost:8080/api/v2/health/`). For more information about the support policies for endpoints, see [API Support Policy](api-support-policy.html). - -Each listed endpoint links to its full [API reference documentation](https://cockroachlabs.com/docs/api/cluster/v2.html). - -Endpoint | Name | Description | Support ---- | --- | --- | --- -[`/databases`](https://cockroachlabs.com/docs/api/cluster/v2.html#operation/listDatabases) | List databases | Get all databases in the cluster. | Stable -[`/databases/{database}`](https://cockroachlabs.com/docs/api/cluster/v2.html#operation/databaseDetails) | Get database details | Get the descriptor ID of a specified database. | Stable -[`/databases/{database}/grants`](https://cockroachlabs.com/docs/api/cluster/v2.html#operation/databaseGrants) | List database grants | List all [privileges](security-reference/authorization.html#managing-privileges) granted to users for a specified database. | Stable -[`/databases/{database}/tables`](https://cockroachlabs.com/docs/api/cluster/v2.html#operation/databaseTables) | List database tables | List all tables in a specified database. | Stable -[`/databases/{database}/tables/{table}`](https://cockroachlabs.com/docs/api/cluster/v2.html#operation/tableDetails) | Get table details | Get details on a specified table, including schema, grants, indexes, range count, and zone configuration. | Stable -[`/events`](https://cockroachlabs.com/docs/api/cluster/v2.html#operation/listEvents) | List events | List the latest [events](eventlog.html) on the cluster, in descending order. | Unstable -[`/health`](https://cockroachlabs.com/docs/api/cluster/v2.html#operation/health) | Check node health | Determine if the node is running and ready to accept SQL connections. | Stable -[`/nodes`](https://cockroachlabs.com/docs/api/cluster/v2.html#operation/listNodes) | List nodes | Get details on all nodes in the cluster, including node IDs, software versions, and hardware. | Stable -[`/nodes/{node_id}/ranges`](https://cockroachlabs.com/docs/api/cluster/v2.html#operation/listNodeRanges) | List node ranges | Get details on the ranges on a specified node. | Unstable -[`/ranges/hot`](https://cockroachlabs.com/docs/api/cluster/v2.html#operation/listHotRanges) | List hot ranges | Get information on ranges receiving a high number of reads or writes. | Stable -[`/ranges/{range_id}`](https://cockroachlabs.com/docs/api/cluster/v2.html#operation/listRange) | Get range details | Get detailed technical information on a range. Typically used by Cockroach Labs engineers. | Unstable -[`/sessions`](https://cockroachlabs.com/docs/api/cluster/v2.html#operation/listSessions) | List sessions | Get SQL session details of all current users or a specified user. | Unstable -[`/users`](https://cockroachlabs.com/docs/api/cluster/v2.html#operation/listUsers) | List users | List all SQL users on the cluster. | Stable -[`/login`](https://cockroachlabs.com/docs/api/cluster/v2.html#operation/login) | Log in | Authenticate as a [SQL role](create-role.html#create-a-role-that-can-log-in-to-the-database) that is a member of the [`admin` role](security-reference/authorization.html#admin-role) to retrieve a session token to use with further API calls. 
| Stable -[`/logout`](https://cockroachlabs.com/docs/api/cluster/v2.html#operation/logout) | Log out | Invalidate the session token. | Stable - -## Requirements - -All endpoints except `/health` and `/login` require authentication using a session token. To obtain a session token, you will need: - -* A [SQL role](create-role.html) that is a member of the [`admin` role](security-reference/authorization.html#admin-role) and has login permissions and a password. - -To connect with the API on a secure cluster, you will need: - -* The CA cert used by the cluster or any intermediary proxy server, either in the client's cert store as a trusted certificate authority or as a file manually specified by the HTTP request (for example, using curl's [cacert](https://curl.se/docs/manpage.html#--cacert)). - -## Authentication - -To create and manage web sessions and authentication tokens to the Cluster API from the command line, use the [`cockroach auth-session`](cockroach-auth-session.html) CLI command. - -Alternatively, you may also request a token directly from the `/login` endpoint using the following instructions: - -1. Request a session token using the `/login` endpoint. For example: - - {% include_cached copy-clipboard.html %} - ~~~ shell - curl -d "username=user&password=pass" \ - -H 'Content-Type: application/x-www-form-urlencoded' \ - https://localhost:8080/api/v2/login/ - ~~~ - -2. Record the token (`session` value) that is returned. - - {% include_cached copy-clipboard.html %} - ~~~ shell - {"session":"CIGAiPis4fj3CBIQ3u0rRQJ3tD8yIqee4hipow=="} - ~~~ - -3. Pass the token with each call using the `X-Cockroach-API-Session` header. For example: - - {% include_cached copy-clipboard.html %} - ~~~ shell - curl -H "X-Cockroach-API-Session: CIGAiPis4fj3CBIQ3u0rRQJ3tD8yIqee4hipow==" \ - https://localhost:8080/api/v2/nodes/ - ~~~ - -## Versioning and stability - -The Cluster API version is defined in the request path. For example: `/api/v2/health`. - -Future versions of CockroachDB may provide multiple API versions and will continue to provide access to this v2.0 API until it is deprecated. - -All endpoint paths and payloads will remain available within a major API version number (`v2.x`). Patch versions could add new endpoints but will not remove existing endpoints. For more information, see [API Support Policy](api-support-policy.html). diff --git a/src/current/v22.1/cluster-settings.md b/src/current/v22.1/cluster-settings.md deleted file mode 100644 index fe2c48a5f34..00000000000 --- a/src/current/v22.1/cluster-settings.md +++ /dev/null @@ -1,44 +0,0 @@ ---- -title: Cluster Settings -summary: Learn about cluster settings that apply to all nodes of a CockroachDB cluster. -toc: false -docs_area: reference.cluster_settings ---- - -Cluster settings apply to all nodes of a CockroachDB cluster and control, for example, whether or not to share diagnostic details with Cockroach Labs as well as advanced options for debugging and cluster tuning. - -They can be updated anytime after a cluster has been started, but only by a member of the `admin` role, to which the `root` user belongs by default. - -{{site.data.alerts.callout_info}} -In contrast to cluster-wide settings, node-level settings apply to a single node. They are defined by flags passed to the `cockroach start` command when starting a node and cannot be changed without stopping and restarting the node. For more details, see [Start a Node](cockroach-start.html). 
-{{site.data.alerts.end}} - -## Settings - -{{site.data.alerts.callout_danger}} -These cluster settings have a broad impact on CockroachDB internals and affect all applications, workloads, and users running on a CockroachDB cluster. For some settings, a [session setting](set-vars.html#supported-variables) could be a more appropriate scope. -{{site.data.alerts.end}} - -{% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/{{ page.release_info.crdb_branch_name }}/docs/generated/settings/settings.html %} - -## View current cluster settings - -Use the [`SHOW CLUSTER SETTING`](show-cluster-setting.html) statement. - -## Change a cluster setting - -Use the [`SET CLUSTER SETTING`](set-cluster-setting.html) statement. - -Before changing a cluster setting, please note the following: - -- Changing a cluster setting is not instantaneous, as the change must be propagated to other nodes in the cluster. - -- Do not change cluster settings while [upgrading to a new version of CockroachDB](upgrade-cockroach-version.html). Wait until all nodes have been upgraded before you make the change. - -## See also - -- [`SET CLUSTER SETTING`](set-cluster-setting.html) -- [`SHOW CLUSTER SETTING`](show-cluster-setting.html) -- [Diagnostics Reporting](diagnostics-reporting.html) -- [Start a Node](cockroach-start.html) -- [Use the Built-in SQL Client](cockroach-sql.html) diff --git a/src/current/v22.1/cluster-setup-troubleshooting.md b/src/current/v22.1/cluster-setup-troubleshooting.md deleted file mode 100644 index e8c3e94aae8..00000000000 --- a/src/current/v22.1/cluster-setup-troubleshooting.md +++ /dev/null @@ -1,610 +0,0 @@ ---- -title: Troubleshoot Cluster Setup -summary: Learn how to troubleshoot issues with starting CockroachDB clusters -toc: true -docs_area: manage ---- - -If you're having trouble starting or scaling your cluster, this page will help you troubleshoot the issue. - -To use this guide, it's important to understand some of CockroachDB's terminology: - - - A **cluster** acts as a single logical database, but is actually made up of many cooperating nodes. - - **Nodes** are single instances of the `cockroach` binary running on a machine. It's possible (though atypical) to have multiple nodes running on a single machine. - -## Cannot run a single-node CockroachDB cluster - -Try running: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach start-single-node --insecure -~~~ - -If the process exits prematurely, check for the following: - -#### An existing storage directory - -When starting a node, the directory you choose to store the data in also contains metadata identifying the cluster the data came from. This causes conflicts when you've already started a node on the server, have quit `cockroach`, and then tried to start another cluster using the same directory. Because the existing directory's cluster ID doesn't match the new cluster ID, the node cannot start. - -**Solution:** Disassociate the node from the existing directory where you've stored CockroachDB data. 
For example, you can do either of the following: - -- Choose a different directory to store the CockroachDB data: - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start-single-node --store= --insecure - ~~~ -- Remove the existing directory and start the node again: - {% include_cached copy-clipboard.html %} - ~~~ shell - $ rm -r cockroach-data/ - ~~~ - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start-single-node --insecure - ~~~ - -#### Toolchain incompatibility - -The components of the toolchain might have some incompatibilities that need to be resolved. For example, a few months ago, there was an incompatibility between Xcode 8.3 and Go 1.8 that caused any Go binaries created with that toolchain combination to crash immediately. - -#### Incompatible CPU - -If the `cockroach` process had exit status `132 (SIGILL)`, it attempted to use an instruction that is not supported by your CPU. Non-release builds of CockroachDB may not be able to run on older hardware platforms than the one used to build them. Release builds should run on any x86-64 CPU. - -#### Default ports already in use - -Other services may be running on port 26257 or 8080 (CockroachDB's default `--listen-addr` port and `--http-addr` port respectively). You can either stop those services or start your node with different ports, specified in the [`--listen-addr` and `--http-addr` flags](cockroach-start.html#networking). - - If you change the port, you will need to include the `--port=` flag in each subsequent cockroach command or change the `COCKROACH_PORT` environment variable. - -#### Single-node networking issues - -Networking issues might prevent the node from communicating with itself on its hostname. You can control the hostname CockroachDB uses with the [`--listen-addr` flag](cockroach-start.html#networking). - - If you change the host, you will need to include `--host=` in each subsequent cockroach command. - -#### CockroachDB process hangs when trying to start a node in the background - -See [Why is my process hanging when I try to start it in the background?](operational-faqs.html#why-is-my-process-hanging-when-i-try-to-start-nodes-with-the-background-flag) - -## Cannot run SQL statements using built-in SQL client - -If the CockroachDB node appeared to [start successfully](start-a-local-cluster.html), in a separate terminal run: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure -e "show databases" -~~~ - -You should see a list of the built-in databases: - -~~~ - database_name -+---------------+ - defaultdb - postgres - system -(3 rows) -~~~ - -If you’re not seeing the output above, check for the following: - -- `connection refused` error, which indicates you have not included some flag that you used to start the node. We have additional troubleshooting steps for this error [here](common-errors.html#connection-refused). -- The node crashed. To ascertain if the node crashed, run `ps | grep cockroach` to look for the `cockroach` process. If you cannot locate the `cockroach` process (i.e., it crashed), [file an issue](file-an-issue.html), including the [logs from your node](configure-logs.html#logging-directory) and any errors you received. - -## Cannot run a multi-node CockroachDB cluster on the same machine - -{{site.data.alerts.callout_info}} -Running multiple nodes on a single host is useful for testing CockroachDB, but it's not recommended for production deployments. 
To run a physically distributed cluster in production, see [Manual Deployment](manual-deployment.html) or [Kubernetes Overview](kubernetes-overview.html). Also be sure to review the [Production Checklist](recommended-production-settings.html). -{{site.data.alerts.end}} - -If you are trying to run all nodes on the same machine, you might get the following errors: - -#### Store directory already exists - -~~~ -ERROR: could not cleanup temporary directories from record file: could not lock temporary directory /Users/amruta/go/src/github.com/cockroachdb/cockroach/cockroach-data/cockroach-temp301343769, may still be in use: IO error: While lock file: /Users/amruta/go/src/github.com/cockroachdb/cockroach/cockroach-data/cockroach-temp301343769/TEMP_DIR.LOCK: Resource temporarily unavailable -~~~ - -**Explanation:** When starting a new node on the same machine, the directory you choose to store the data in also contains metadata identifying the cluster the data came from. This causes conflicts when you've already started a node on the server and then tried to start another cluster using the same directory. - -**Solution:** Choose a different directory to store the CockroachDB data. - -#### Port already in use - -~~~ -ERROR: cockroach server exited with error: consider changing the port via --listen-addr: listen tcp 127.0.0.1:26257: bind: address already in use -~~~ - -**Solution:** Change the `--port`, `--http-port` flags for each new node that you want to run on the same machine. - -## Scaling issues - -#### Cannot join a node to an existing CockroachDB cluster - -###### Store directory already exists - -When joining a node to a cluster, you might receive one of the following errors: - -~~~ -no resolvers found; use --join to specify a connected node - -node belongs to cluster {"cluster hash"} but is attempting to connect to a gossip network for cluster {"another cluster hash"} -~~~ - -**Explanation:** When starting a node, the directory you choose to store the data in also contains metadata identifying the cluster the data came from. This causes conflicts when you've already started a node on the server, have quit the `cockroach` process, and then tried to join another cluster. Because the existing directory's cluster ID doesn't match the new cluster ID, the node cannot join it. - -**Solution:** Disassociate the node from the existing directory where you've stored CockroachDB data. For example, you can do either of the following: - -- Choose a different directory to store the CockroachDB data: - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start --store= --join= - ~~~ -- Remove the existing directory and start a node joining the cluster again: - {% include_cached copy-clipboard.html %} - ~~~ shell - $ rm -r cockroach-data/ - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start --join=:26257 - ~~~ - -###### Incorrect `--join` address - -If you try to add another node to the cluster, but the `--join` address is not pointing at any of the existing nodes, then the process will never complete, and you'll see a continuous stream of warnings like this: - -~~~ -W180817 17:01:56.506968 886 vendor/google.golang.org/grpc/clientconn.go:942 Failed to dial localhost:20000: grpc: the connection is closing; please retry. -W180817 17:01:56.510430 914 vendor/google.golang.org/grpc/clientconn.go:1293 grpc: addrConn.createTransport failed to connect to {localhost:20000 0 }. 
Err :connection error: desc = "transport: Error while dialing dial tcp [::1]:20000: connect: connection refused". Reconnecting… -~~~ - -**Explanation:** These warnings tell you that the node cannot establish a connection with the address specified in the `--join` flag. Without a connection to the cluster, the node cannot join. - -**Solution:** To successfully join the node to the cluster, start the node again, but this time include a correct `--join` address. - -#### Performance is degraded when adding nodes - -###### Excessive snapshot rebalance and recovery rates - -The `kv.snapshot_rebalance.max_rate` and `kv.snapshot_recovery.max_rate` [cluster settings](cluster-settings.html) set the rate limits at which [snapshots](architecture/replication-layer.html#snapshots) are sent to nodes. These settings can be temporarily increased to expedite replication during an outage or when scaling a cluster up or down. - -However, if the settings are too high when nodes are added to the cluster, this can cause degraded performance and node crashes. We recommend **not** increasing these values by more than 2 times their [default values](cluster-settings.html) without explicit approval from Cockroach Labs. - -**Explanation:** If `kv.snapshot_rebalance.max_rate` and `kv.snapshot_recovery.max_rate` are set too high for the cluster during scaling, this can cause nodes to experience ingestions faster than compactions can keep up, and result in an [inverted LSM](architecture/storage-layer.html#inverted-lsms). - -**Solution:** [Check LSM health](common-issues-to-monitor.html#lsm-health). {% include {{ page.version.version }}/prod-deployment/resolution-inverted-lsm.md %} - -After compaction has completed, lower `kv.snapshot_rebalance.max_rate` and `kv.snapshot_recovery.max_rate` to their [default values](cluster-settings.html). As you add nodes to the cluster, slowly increase both cluster settings, if desired. This will control the rate of new ingestions for newly added nodes. Meanwhile, monitor the cluster for unhealthy increases in [IOPS](common-issues-to-monitor.html#disk-iops) and [CPU](common-issues-to-monitor.html#cpu). - -Outside of performing cluster maintenance, return `kv.snapshot_rebalance.max_rate` and `kv.snapshot_recovery.max_rate` to their [default values](cluster-settings.html). - -{% include_cached copy-clipboard.html %} -~~~ sql -RESET CLUSTER SETTING kv.snapshot_rebalance.max_rate; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -RESET CLUSTER SETTING kv.snapshot_recovery.max_rate; -~~~ - -## Client connection issues - -If a client cannot connect to the cluster, check basic network connectivity (`ping`), port connectivity (`telnet`), and certificate validity. - -#### Networking issues - -Most networking-related issues are caused by one of two issues: - -- Firewall rules, which require your network administrator to investigate -- Inaccessible hostnames on your nodes, which can be controlled with the `--listen-addr` and `--advertise-addr` flags on [`cockroach start`](cockroach-start.html#networking) - - -**Solution:** - -To check your networking setup: - -1. Use `ping`. Every machine you are using as a CockroachDB node should be able to ping every other machine, using the hostnames or IP addresses used in the `--join` flags (and the `--advertise-host` flag if you are using it). - -2. If the machines are all pingable, check if you can connect to the appropriate ports. 
With your CockroachDB nodes still running, log in to each node and use `telnet` or` nc` to verify machine to machine connectivity on the desired port. For instance, if you are running CockroachDB on the default port of 26257, run either: - - `telnet 26257` - - `nc 26257` - - Both `telnet` and `nc` will exit immediately if a connection cannot be established. If you are running in a firewalled environment, the firewall might be blocking traffic to the desired ports even though it is letting ping packets through. - -To efficiently troubleshoot the issue, it's important to understand where and why it's occurring. We recommend checking the following network-related issues: - -- By default, CockroachDB advertises itself to other nodes using its hostname. If your environment doesn't support DNS or the hostname is not resolvable, your nodes cannot connect to one another. In these cases, you can: - - Change the hostname each node uses to advertises itself with `--advertise-addr` - - Set `--listen-addr=` if the IP is a valid interface on the machine -- Every node in the cluster should be able to ping each other node on the hostnames or IP addresses you use in the `--join`, `--listen-addr`, or `--advertise-addr` flags. -- Every node should be able to connect to other nodes on the port you're using for CockroachDB (26257 by default) through `telnet` or `nc`: - - `telnet 26257` - - `nc 26257` - -Again, firewalls or hostname issues can cause any of these steps to fail. - -#### Network partition - -If the DB Console lists any dead nodes on the [**Cluster Overview** page](ui-cluster-overview-page.html), then you might have a network partition. - -**Explanation:** A network partition prevents nodes from communicating with each other in one or both directions. This can be due to a configuration problem with the network, such as when allowlisted IP addresses or hostnames change after a node is torn down and rebuilt. In a symmetric partition, node communication is broken in both directions. In an asymmetric partition, node communication works in one direction but not the other. - -The effect of a network partition depends on which nodes are partitioned, where the ranges are located, and to a large extent, whether [localities](cockroach-start.html#locality) are defined. If localities are not defined, a partition that cuts off at least (n-1)/2 nodes will cause data unavailability. - -**Solution:** - -To identify a network partition: - -1. Access the [Network Latency](ui-network-latency-page.html) page of the DB Console. -2. In the **Latencies** table, check for nodes with [no connections](ui-network-latency-page.html#no-connections). This indicates that a node cannot communicate with another node, and might indicate a network partition. - -## Authentication issues - -#### Missing certificate - -If you try to add a node to a secure cluster without providing the node's security certificate, you will get the following error: - -~~~ -problem with CA certificate: not found -* -* ERROR: cannot load certificates. -* Check your certificate settings, set --certs-dir, or use --insecure for insecure clusters. -* -* problem with CA certificate: not found -* -Failed running "start" -~~~ - -**Explanation:** The error tells you that because the cluster is secure, it requires the new node to provide its security certificate in order to join. 
- -**Solution:** To successfully join the node to the cluster, start the node again, but this time include the `--certs-dir` flag - -#### Certification expiration - -If you’re running a secure cluster, be sure to monitor your certificate expiration. If one of the inter-node certificates expires, nodes will no longer be able to communicate which can look like a network partition. - -To check the certificate expiration date: - -1. [Access the DB Console](ui-overview.html#db-console-access). -2. Click the gear icon on the left-hand navigation bar to access the **Advanced Debugging** page. -3. Scroll down to the **Even More Advanced Debugging** section. Click **All Nodes**. The **Node Diagnostics** page appears. Click the certificates for each node and check the expiration date for each certificate in the Valid Until field. - -#### Client password not set - -While connecting to a secure cluster as a user, CockroachDB first checks if the client certificate exists in the `cert` directory. If the client certificate doesn’t exist, it prompts for a password. If password is not set and you press Enter, the connection attempt fails, and the following error is printed to `stderr`: - -~~~ -Error: pq: invalid password -Failed running "sql" -~~~ - -**Solution:** To successfully connect to the cluster, you must first either generate a client certificate or create a password for the user. - -#### Cannot create new connections to cluster for up to 40 seconds after a node dies - -When a node [dies abruptly and/or loses its network connection to the cluster](#node-liveness-issues), the following behavior can occur: - -1. For a period of up to 40 seconds, clients trying to connect with [username and password authentication](authentication.html#client-authentication) cannot create new connections to any of the remaining nodes in the cluster. -1. Applications start timing out when trying to connect to the cluster during this window. - -The reason this happens is as follows: - -- Username and password information is stored in a system range. -- Since all system ranges are located [near the beginning of the keyspace](architecture/distribution-layer.html#monolithic-sorted-map-structure), the system range containing the username/password info can sometimes be colocated with another system range that is used to determine [node liveness](#node-liveness-issues). -- If the username/password info and the node liveness record are stored together as described above, it can take extra time for the lease on this range to be transferred to another node. Normally, lease transfers take about 10 seconds, but in this case it may require multiple rounds of consensus to determine that the node in question is actually dead (the node liveness record check may be retried several times before failing). - -For more information about how lease transfers work when a node dies, see [How leases are transferred from a dead node](architecture/replication-layer.html#how-leases-are-transferred-from-a-dead-node). - -The solution is to add connection retry logic to your application. 
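For example, a minimal retry loop around the built-in SQL client might look like the following. This is only a sketch: the host, certificate directory, and 5-second pause are placeholder values, and a production application should implement equivalent retry logic in its driver or connection pool rather than in a shell script.

{% include_cached copy-clipboard.html %}
~~~ shell
# Retry until a connection to a live node succeeds.
until cockroach sql --certs-dir=certs --host=localhost:26257 -e "SELECT 1;"; do
  echo "connection attempt failed; retrying in 5 seconds..."
  sleep 5
done
~~~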
- -## Clock sync issues - -#### Node clocks are not properly synchronized - -See the following FAQs: - -- [What happens when node clocks are not properly synchronized](operational-faqs.html#what-happens-when-node-clocks-are-not-properly-synchronized) -- [How can I tell how well node clocks are synchronized](operational-faqs.html#how-can-i-tell-how-well-node-clocks-are-synchronized) - -## Capacity planning issues - -You may encounter the following issues when your cluster nears 100% resource capacity: - -- Running CPU at close to 100% utilization with high run queue will result in poor performance. -- Running RAM at close to 100% utilization triggers Linux [OOM](#out-of-memory-oom-crash) and/or swapping that will result in poor performance or stability issues. -- Running storage at 100% capacity causes writes to fail, which in turn can cause various processes to stop. -- Running storage at 100% utilization read/write causes poor service time and [node shutdown](operational-faqs.html#what-happens-when-a-node-runs-out-of-disk-space). -- Running network at 100% utilization causes response between databases and client to be poor. - -**Solution:** [Access the DB Console](ui-overview.html#db-console-access) and navigate to **Metrics > Hardware** dashboard to monitor the following metrics: - -Check that adequate capacity was available for the incident: - -Type | Time Series | What to look for ---------|--------|--------| -RAM capacity | Memory Usage | Any non-zero value -CPU capacity | CPU Percent | Consistent non-zero values -Disk capacity | Available Disk Capacity | Any non-zero value -Disk I/O | Disk Ops In Progress | Zero or occasional single-digit values -Network capacity | Network Bytes Received
Network Bytes Sent | Any non-zero value - -{{site.data.alerts.callout_info}} -For minimum provisioning guidelines, see [Basic hardware recommendations](recommended-production-settings.html#basic-hardware-recommendations). -{{site.data.alerts.end}} - -Check for resources that are running out of capacity: - -Type | Time Series | What to look for ---------|--------|--------| -RAM capacity | Memory Usage | Consistently more than 80% -CPU capacity | CPU Percent | Consistently less than 20% in idle (i.e., 80% busy) -Disk capacity | Available Disk Capacity | Consistently less than 20% of the [store](cockroach-start.html#store) size -Disk I/O | Disk Ops In Progress | Consistent double-digit values -Network capacity | Network Bytes Received
Network Bytes Sent | Consistently more than 50% capacity for both - -## Storage issues - -#### Disks filling up - -Like any database system, if you run out of disk space the system will no longer be able to accept writes. Additionally, a CockroachDB node needs a small amount of disk space (a few GiBs to be safe) to perform basic maintenance functionality. For more information about this issue, see: - -- [What happens when a node runs out of disk space?](operational-faqs.html#what-happens-when-a-node-runs-out-of-disk-space) -- [Why is memory usage increasing despite lack of traffic?](operational-faqs.html#why-is-memory-usage-increasing-despite-lack-of-traffic) -- [Why is disk usage increasing despite lack of writes?](operational-faqs.html#why-is-disk-usage-increasing-despite-lack-of-writes) -- [Can I reduce or disable the storage of timeseries data?](operational-faqs.html#can-i-reduce-or-disable-the-storage-of-time-series-data) - -###### Automatic ballast files - - CockroachDB automatically creates an emergency ballast file at [node startup](cockroach-start.html). This feature is **on** by default. Note that the [`cockroach debug ballast`](cockroach-debug-ballast.html) command is still available but deprecated. - -The ballast file defaults to 1% of total disk capacity or 1 GiB, whichever is smaller. The size of the ballast file may be configured using [the `--store` flag to `cockroach start`](cockroach-start.html#flags-store) with a [`ballast-size` field](cockroach-start.html#fields-ballast-size); this field accepts the same value formats as the `size` field. - -In order for the ballast file to be automatically created, the following conditions must be met: - -- Available disk space is at least four times the configured ballast file size. -- Available disk space on the store after creating the ballast file is at least 10 GiB. - -During node startup, if available disk space on at least one store is less than or equal to half the ballast file size, the process will exit immediately with the exit code 10, signifying 'Disk Full'. - -To allow the node to start, you can manually remove the `EMERGENCY_BALLAST` file, which is located in the store's `cockroach-data/auxiliary` directory as shown below: - -~~~ -cockroach-data -├── ... -├── auxiliary -│ └── EMERGENCY_BALLAST -... -~~~ - -Removing the ballast file will give you a chance to remedy the disk space exhaustion; it will automatically be recreated when there is sufficient disk space. - -{{site.data.alerts.callout_info}} -Different filesystems may treat the ballast file differently. Make sure to test that the file exists, and that space for the file is actually being reserved by the filesystem. For a list of supported filesystems, see the [Production Checklist](recommended-production-settings.html#storage). -{{site.data.alerts.end}} - -#### Disk stalls - -A _disk stall_ is any disk operation that does not terminate in a reasonable amount of time. This usually manifests as write-related system calls such as [`fsync(2)`](https://man7.org/linux/man-pages/man2/fdatasync.2.html) (aka `fdatasync`) taking a lot longer than expected (e.g., more than 60 seconds). The mitigation in almost all cases is to [restart the node](cockroach-start.html) with the stalled disk. CockroachDB's internal disk stall monitoring will attempt to shut down a node when it sees a disk stall that lasts longer than 60 seconds. At that point the node should be restarted by your [orchestration system](recommended-production-settings.html#orchestration-kubernetes). 
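To check whether a node has recently detected stalled disk operations, you can search its logs for the warning described below. This is a sketch only: the log directory shown assumes the default store location, so adjust the path to match your `--store` and logging configuration.

{% include_cached copy-clipboard.html %}
~~~ shell
# Look for disk stall warnings emitted by the storage engine.
grep -i "disk stall detected" cockroach-data/logs/*.log
~~~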
- -Symptoms of disk stalls include: - -- Bad cluster write performance, usually in the form of a substantial drop in QPS for a given workload. -- [Node liveness issues](#node-liveness-issues). -- Writes on one node come to a halt. This can happen because in rare cases, a node may be able to perform liveness checks (which involve writing to disk) even though it cannot write other data to disk due to one or more slow/stalled calls to `fsync`. Because the node is passing its liveness checks, it is able to hang onto its leases even though it cannot make progress on the ranges for which it is the leaseholder. This wedged node has a ripple effect on the rest of the cluster such that all processing of the ranges whose leaseholders are on that node basically grinds to a halt. As mentioned above, CockroachDB's disk stall detection will attempt to shut down the node when it detects this state. - -Causes of disk stalls include: - -- Disk operations have slowed due to underprovisioned IOPS. Make sure you are deploying with our [recommended production settings for storage](recommended-production-settings.html#storage) and [monitoring disk IOPS](common-issues-to-monitor.html#disk-iops). -- Actual hardware-level storage issues that result in slow `fsync` performance. -- In rare cases, operating-system-level configuration of subsystems such as SELinux can slow down system calls such as `fsync` enough to affect storage engine performance. - -CockroachDB's built-in disk stall detection works as follows: - -- Every 10 seconds, the CockroachDB storage engine checks the [_write-ahead log_](https://en.wikipedia.org/wiki/Write-ahead_logging), or _WAL_. If data has not been synced to disk (via `fsync`) within that interval, the log message `disk stall detected: unable to write to %s within %s %s warning log entry` is written to the [`STORAGE` logging channel](logging.html#storage). If this state continues for 20 seconds or more (configurable with the `COCKROACH_ENGINE_MAX_SYNC_DURATION` environment variable), the `cockroach` process is terminated. - -- Every time the storage engine writes to the main [`cockroach.log` file](logging.html#dev), the engine waits 30 seconds for the write to succeed (configurable with the `COCKROACH_LOG_MAX_SYNC_DURATION` environment variable). If the write to the log fails, the `cockroach` process is terminated and the following message is written to stderr / `cockroach.log`: - - - `disk stall detected: unable to sync log files within %s` - -- During [node liveness heartbeats](#node-liveness-issues), the [storage engine](architecture/storage-layer.html) writes to disk as part of the node liveness heartbeat process. - -## CPU issues - -#### CPU is insufficient for the workload - -Issues with CPU most commonly arise when there is insufficient CPU to support the scale of the workload. If the concurrency of your workload significantly exceeds your provisioned CPU, you will encounter a [degradation in SQL response time](common-issues-to-monitor.html#service-latency). This is the most common symptom of CPU starvation. - -Because compaction requires significant CPU to run concurrent worker threads, a lack of CPU resources will eventually cause compaction to fall behind. This leads to [read amplification](architecture/storage-layer.html#read-amplification) and inversion of the log-structured merge (LSM) trees on the [storage layer](architecture/storage-layer.html). 
- -If these issues remain unresolved, affected nodes will miss their liveness heartbeats, causing the cluster to lose nodes and eventually become unresponsive. - -**Solution:** To diagnose and resolve an excessive workload concurrency issue: - -- [Check for high CPU usage.](common-issues-to-monitor.html#cpu-usage) - -- [Check your workload concurrency](common-issues-to-monitor.html#workload-concurrency) and compare it to your provisioned CPU. - - - {% include {{ page.version.version }}/prod-deployment/resolution-excessive-concurrency.md %} - -- [Check LSM health](common-issues-to-monitor.html#lsm-health), which can be affected over time by CPU starvation. - - - {% include {{ page.version.version }}/prod-deployment/resolution-inverted-lsm.md %} - -## Memory issues - -#### Suspected memory leak - -A CockroachDB node will grow to consume all of the memory allocated for its `--cache`, [even if your cluster is idle](operational-faqs.html#why-is-memory-usage-increasing-despite-lack-of-traffic). The default cache size is 25% of physical memory, which can be substantial, depending on your machine configuration. For more information, see [Cache and SQL memory size](recommended-production-settings.html#cache-and-sql-memory-size). - -CockroachDB memory usage has the following components: - -- **Go allocated memory**: Memory allocated by the Go runtime to support query processing and various caches maintained in Go by CockroachDB. -- **CGo allocated memory**: Memory allocated by the C/C++ libraries linked into CockroachDB and primarily concerns the block caches for the [Pebble storage engine](cockroach-start.html#storage-engine)). This is the allocation specified with `--cache`. The size of CGo allocated memory is usually very close to the configured `--cache` size. -- **Overhead**: The RSS (resident set size) minus Go/CGo allocated memory. - -**Solution:** To determine Go and CGo allocated memory: - -1. [Access the DB Console](ui-overview.html#db-console-access). - -1. Navigate to **Metrics > Runtime** dashboard, and check the **Memory Usage** graph. - -1. On hovering over the graph, the values for the following metrics are displayed: - - Metric | Description - --------|---- - RSS | Total memory in use by CockroachDB. - Go Allocated | Memory allocated by the Go layer. - Go Total | Total memory managed by the Go layer. - CGo Allocated | Memory allocated by the C layer. - CGo Total | Total memory managed by the C layer. - - {% include {{ page.version.version }}/prod-deployment/healthy-crdb-memory.md %} - - If you observe any of the following, [file an issue](file-an-issue.html): - - CGo Allocated is larger than the configured `--cache` size. - - RSS minus Go Total and CGo Total is larger than 100 MiB. - - Go Total or CGo Total fluctuates or grows steadily over time. - -#### Out-of-memory (OOM) crash - -When a node exits without logging an error message, the operating system has likely stopped the node due to insufficient memory. - -CockroachDB attempts to restart nodes after they crash. Nodes that frequently restart following an abrupt process exit may point to an underlying memory issue. 
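On Linux hosts, a quick way to confirm that the kernel terminated the process is to check the kernel log for OOM killer entries. This is a sketch; the exact message format varies by distribution.

{% include_cached copy-clipboard.html %}
~~~ shell
# Check the kernel log for evidence that the OOM killer terminated the cockroach process.
dmesg | grep -iE "out of memory|oom-killer|killed process"
~~~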
- -**Solution:** If you [observe nodes restarting after sudden crashes](common-issues-to-monitor.html#node-process-restarts): - -- [Confirm that the node restarts are caused by OOM crashes.](common-issues-to-monitor.html#verify-oom-errors) - - - {% include {{ page.version.version }}/prod-deployment/resolution-oom-crash.md %} - -- [Check whether SQL queries may be responsible.](common-issues-to-monitor.html#sql-memory-usage) - - -## Decommissioning issues - -#### Decommissioning process hangs indefinitely - -If the [decommissioning process](node-shutdown.html?filters=decommission#remove-nodes) appears to be hung on a node, a message like the following will print to `stderr`: - -~~~ -possible decommission stall detected -n3 still has replica id 2 for range r1 -n3 still has replica id 3 for range r2 -n3 still has replica id 2 for range r3 -n3 still has replica id 3 for range r4 -n3 still has replica id 2 for range r5 -~~~ - -**Explanation:** Before decommissioning a node, you need to make sure other nodes are available to take over the range replicas from the node. If no other nodes are available, the decommission process will hang indefinitely. For more information, see [Node Shutdown](node-shutdown.html?filters=decommission#size-and-replication-factor). - -**Solution:** Confirm that there are enough nodes with sufficient storage space to take over the replicas from the node you want to remove. - -## Replication issues - -#### DB Console shows under-replicated/unavailable ranges - -When a CockroachDB node dies (or is partitioned) the under-replicated range count will briefly spike while the system recovers. - -**Explanation:** CockroachDB uses consensus replication and requires a quorum of the replicas to be available in order to allow both writes and reads to the range. The number of failures that can be tolerated is equal to (Replication factor - 1)/2. Thus CockroachDB requires (n-1)/2 nodes to achieve quorum. For example, with 3x replication, one failure can be tolerated; with 5x replication, two failures, and so on. - -- Under-replicated Ranges: When a cluster is first initialized, the few default starting ranges have a single replica. As more nodes become available, the cluster replicates these ranges to other nodes until the number of replicas for each range reaches the desired [replication factor](configure-replication-zones.html#num_replicas) (3 by default). If a range has fewer replicas than the replication factor, the range is said to be "under-replicated". [Non-voting replicas](architecture/replication-layer.html#non-voting-replicas), if configured, are not counted when calculating replication status. - -- Unavailable Ranges: If a majority of a range's replicas are on nodes that are unavailable, then the entire range is unavailable and will be unable to process queries. - -**Solution:** - -To identify under-replicated/unavailable ranges: - -1. [Access the DB Console](ui-overview.html#db-console-access). - -2. On the **Cluster Overview** page, check the **Replication Status**. If the **Under-replicated ranges** or **Unavailable ranges** count is non-zero, then you have under-replicated or unavailable ranges in your cluster. - -3. Check for a network partition: Click the gear icon on the left-hand navigation bar to access the **Advanced Debugging** page. On the Advanced Debugging page, click **Network Latency**. In the **Latencies** table, check if any cells are marked as "X". 
If yes, it indicates that the nodes cannot communicate with those nodes, and might indicate a network partition. If there's no partition, and there's still no upreplication after 5 mins, then [file an issue](file-an-issue.html). - -**Add nodes to the cluster:** - -On the DB Console’s Cluster Overview page, check if any nodes are down. If the number of nodes down is less than (n-1)/2, then that is most probably the cause of the under-replicated/unavailable ranges. Add nodes to the cluster such that the cluster has the required number of nodes to replicate ranges properly. - -If you still see under-replicated/unavailable ranges on the Cluster Overview page, investigate further: - -1. [Access the DB Console](ui-overview.html#db-console-access) -2. Click the gear icon on the left-hand navigation bar to access the **Advanced Debugging** page. -2. Click **Problem Ranges**. -3. In the **Connections** table, identify the node with the under-replicated/unavailable ranges and click the node ID in the Node column. -4. To view the **Range Report** for a range, click on the range number in the **Under-replicated (or slow)** table or **Unavailable** table. -5. On the Range Report page, scroll down to the **Simulated Allocator Output** section. The table contains an error message which explains the reason for the under-replicated range. Follow the guidance in the message to resolve the issue. If you need help understanding the error or the guidance, [file an issue](file-an-issue.html). Please be sure to include the full Range Report and error message when you submit the issue. - -## Node liveness issues - -"Node liveness" refers to whether a node in your cluster has been determined to be "dead" or "alive" by the rest of the cluster. This is achieved using checks that ensure that each node connected to the cluster is updating its liveness record. This information is shared with the rest of the cluster using an internal gossip protocol. - -Common reasons for node liveness issues include: - -- Heavy I/O load on the node. Because each node needs to update a liveness record on disk, maxing out disk bandwidth can cause liveness heartbeats to be missed. See also: [Capacity planning issues](#capacity-planning-issues). -- A [disk stall](#disk-stalls). This will cause node liveness issues for the same reasons as listed above. -- [Insufficient CPU for the workload](#cpu-is-insufficient-for-the-workload). This can eventually cause nodes to miss their liveness heartbeats and become unresponsive. -- [Networking issues](#networking-issues) with the node. - -The [DB Console][db_console] provides several ways to check for node liveness issues in your cluster: - -- [Check node heartbeat latency](common-issues-to-monitor.html#node-heartbeat-latency) -- [Check command commit latency](common-issues-to-monitor.html#command-commit-latency) - -{{site.data.alerts.callout_info}} -For more information about how node liveness works, see [Replication Layer](architecture/replication-layer.html#epoch-based-leases-table-data). -{{site.data.alerts.end}} - -#### Impact of node failure is greater than 10 seconds - -When the cluster needs to access a range on a leaseholder node that is dead, that range's [lease must be transferred to a healthy node](architecture/replication-layer.html#how-leases-are-transferred-from-a-dead-node). In theory, this process should take no more than 9 seconds for liveness expiration plus the cost of several network roundtrips. - -In production, lease transfer upon node failure can take longer than expected. 
In {{ page.version.version }}, this is observed in the following scenarios: - -- **The leaseholder node for the liveness range fails.** The liveness range is a system range that [stores the liveness record](architecture/replication-layer.html#epoch-based-leases-table-data) for each node on the cluster. If a node fails and is also the leaseholder for the liveness range, operations cannot proceed until the liveness range is transferred to a new leaseholder and the liveness record is made available to other nodes. This can cause momentary cluster unavailability. - -- **Network or DNS issues cause connection issues between nodes.** If there is no live server for the IP address or DNS lookup, connection attempts to a node will not return an immediate error, but will hang until timing out. This can cause unavailability and prevent a speedy movement of leases and recovery. CockroachDB avoids contacting unresponsive nodes or DNS during certain performance-critical operations, and the connection issue should generally resolve in 10-30 seconds. However, an attempt to contact an unresponsive node could still occur in other scenarios that are not yet addressed. - -- **A node's disk stalls.** A [disk stall](#disk-stalls) on a node can cause write operations to stall indefinitely, also causes the node's heartbeats to fail since the storage engine cannot write to disk as part of the heartbeat, and may cause read requests to fail if they are waiting for a conflicting write to complete. Lease acquisition from this node can stall indefinitely until the node is shut down or recovered. Pebble detects most stalls and will terminate the `cockroach` process after 20 seconds, but there are gaps in its detection. In **v22.1.2 and later**, each lease acquisition attempt on an unresponsive node times out after 6 seconds. However, CockroachDB can still appear to stall as these timeouts are occurring. - -- **Otherwise unresponsive nodes.** Internal deadlock due to faulty code, resource exhaustion, OS/hardware issues, and other arbitrary failures can make a node unresponsive. This can cause leases to become stuck in certain cases, such as when a response from the previous leaseholder is needed in order to move the lease. - -**Solution:** If you are experiencing intermittent network or connectivity issues, first [shut down the affected nodes](node-shutdown.html) temporarily so that nodes phasing in and out do not cause disruption. - -If a node has become unresponsive without returning an error, [shut down the node](node-shutdown.html) so that network requests immediately become hard errors rather than stalling. - -If you are running a version of CockroachDB that is affected by an issue described here, upgrade to a version that contains the fix for the issue, as described in the preceding list. - -## Partial availability issues - -If your cluster is in a partially-available state due to a recent node or network failure, the internal logging table `system.eventlog` might be unavailable. This can cause the logging of [notable events](eventlog.html) (e.g., the execution of SQL statements) to the `system.eventlog` table to fail to complete, contributing to cluster unavailability. If this occurs, you can set the [cluster setting](cluster-settings.html) `server.eventlog.enabled` to `false` to disable writing notable log events to this table, which may help to recover your cluster. 
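For example, from a SQL client connected to a live node:

{% include_cached copy-clipboard.html %}
~~~ sql
SET CLUSTER SETTING server.eventlog.enabled = false;
~~~

Once the cluster has recovered, re-enable event logging:

{% include_cached copy-clipboard.html %}
~~~ sql
RESET CLUSTER SETTING server.eventlog.enabled;
~~~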
- -Even with `server.eventlog.enabled` set to `false`, notable log events are still sent to configured [log sinks](configure-logs.html#configure-log-sinks) as usual. - -## Check for under-replicated or unavailable data - -To see if any data is under-replicated or unavailable in your cluster, use the `system.replication_stats` report as described in [Replication Reports](query-replication-reports.html). - -## Check for replication zone constraint violations - -To see if any of your cluster's [data placement constraints](configure-replication-zones.html#replication-constraints) are being violated, use the `system.replication_constraint_stats` report as described in [Replication Reports](query-replication-reports.html). - -## Check for critical localities - -To see which of your [localities](cockroach-start.html#locality) (if any) are critical, use the `system.replication_critical_localities` report as described in [Replication Reports](query-replication-reports.html). A locality is "critical" for a range if all of the nodes in that locality becoming [unreachable](#node-liveness-issues) would cause the range to become unavailable. In other words, the locality contains a majority of the range's replicas. - -## Something else? - -If we do not have a solution here, you can try using our other [support resources](support-resources.html), including: - -- [StackOverflow](http://stackoverflow.com/questions/tagged/cockroachdb) -- [CockroachDB Community Forum](https://forum.cockroachlabs.com) -- [Chatting with our developers on Slack](https://cockroachdb.slack.com) - - - -[db_console]: ui-overview.html diff --git a/src/current/v22.1/cockroach-auth-session.md b/src/current/v22.1/cockroach-auth-session.md deleted file mode 100644 index 87b587bfde0..00000000000 --- a/src/current/v22.1/cockroach-auth-session.md +++ /dev/null @@ -1,210 +0,0 @@ ---- -title: cockroach auth-session -summary: To create and manage web sessions and authentication tokens to the HTTP interface from the command line, use the cockroach auth-session command. -toc: true -docs_area: reference.cli ---- - -To create and manage web sessions and authentication tokens to the HTTP interface from the command line, use the `cockroach auth-session` [command](cockroach-commands.html) with the appropriate subcommands and flags. - -## Subcommands - -Subcommand | Usage ------------|------ -`login` | Authenticate a user against a running cluster's HTTP interface, generating an HTTP authentication token (a "cookie") which can also be used by non-interactive HTTP-based database management tools. Must be used with a valid, existing user. May be used to generate a cookie for the `root` user. -`logout` | Revokes all previously-issued HTTP authentication tokens for the given user. -`list` | List all authenticated sessions to the HTTP interface, including currently active and recently expired sessions. 
- -## Synopsis - -Log in to the HTTP interface, generating an HTTP authentication token for a given user: - -~~~ shell -$ cockroach auth-session login {username} [flags] -~~~ - -Log out from the HTTP interface, revoking all active HTTP authentication tokens for a given user: - -~~~ shell -$ cockroach auth-session logout {username} [flags] -~~~ - -List all authenticated sessions to the HTTP interface, including currently active and recently expired sessions: - -~~~ shell -$ cockroach auth-session list [flags] -~~~ - -View help: - -~~~ shell -$ cockroach auth-session --help -~~~ - -~~~ shell -$ cockroach auth-session {subcommand} --help -~~~ - -## Flags - -All three `auth-session` subcommands accept the standard [SQL command-line flags](cockroach-start.html#flags). - -In addition, the `auth-session login` subcommand supports the following flags. - -Flag | Description ------|------------ -`--expire-after` | Duration of the newly-created HTTP authentication token, after which the token expires. Specify the duration in numeric values suffixed by one or more of `h`, `m`, and `s` to indicate hour, minute, and second duration. See the [example](#log-in-to-the-http-interface-with-a-custom-expiry).

**Default:** `1h0m0s` (1 hour) -`--only-cookie` | Limits output to only the newly-created HTTP authentication token (the "cookie") in the response, appropriate for output to other commands. See the [example](#log-in-to-the-http-interface-with-limited-command-output). - -## Response - -The `cockroach auth-session` subcommands return the following fields. - -### `auth-session login` - -Field | Description -------|------------ -`username` | The username of the user authenticated. -`session ID` | The session ID to the HTTP interface previously established for that user. -`authentication cookie` | The cookie that may be used from the command line, or from other tools, to authenticate access to the HTTP interface for that user. - -### `auth-session logout` - -Field | Description -------|------------ -`username` | The username of the user whose session was revoked. -`session ID` | The session ID to the HTTP interface previously established for that user. -`revoked` | The date and time of revocation for that user's authenticated session. - -### `auth-session list` - -Field | Description -------|------------ -`username` | The username of the user authenticated. -`session ID` | The session ID to the HTTP interface established for that user. -`created` | The date and time a session was created. -`expired` | The date and time a session expired. -`revoked` | The date and time of revocation for that user's authenticated session. If the session is still active, this will appear as `NULL`. -`last used` | The date and time of the last access to the HTTP interface using this session token. - -## Required roles - -To run any of the `auth-session` subcommands, you must be a member of the [`admin` role](security-reference/authorization.html#admin-role). The user being authenticated via `login` or `logout` does not require any special roles. - -## Considerations - -- The `login` subcommand allows users with the [`admin` role](security-reference/authorization.html#admin-role) to create HTTP authentication tokens with an arbitrary duration. If operational policy requires stricter control of authentication sessions, you can: - - - Monitor the `system.web_sessions` table for all current and recent HTTP sessions. - - Revoke HTTP authentication tokens as needed with the `logout` subcommand. See the [example](#terminate-all-active-sessions-for-a-user). - - Set the `--expire-after` flag with a shorter duration. See the [example](#log-in-to-the-http-interface-with-a-custom-expiry). - -- The `logout` subcommand logs out all sessions for the given user; you cannot target individual sessions for logout. If more granular control of sessions is desired, consider setting the `--expire-after` flag with a shorter duration. See the [example](#log-in-to-the-http-interface-with-a-custom-expiry). 
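As noted in the considerations above, current and recent HTTP sessions can be reviewed directly from SQL by querying `system.web_sessions`. The following query is a sketch: the column names reflect that table's schema and can be confirmed with `SHOW COLUMNS FROM system.web_sessions`.

{% include_cached copy-clipboard.html %}
~~~ sql
SELECT id, username, "createdAt", "expiresAt", "revokedAt", "lastUsedAt"
FROM system.web_sessions
ORDER BY "createdAt" DESC;
~~~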
- -## Examples - -### Log in to the HTTP interface - -Log in to the HTTP interface, by generating a new HTTP authentication token for the `web_user` user: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach auth-session login web_user -~~~ - -~~~ - username | session ID | authentication cookie ------------+--------------------+--------------------------------------------------------------------- - web_user | 784445853689282561 | session=CIGAtrWQq7rxChIQTXxYNNQxAYjyLAjHWxgUMQ==; Path=/; HttpOnly; Secure -(1 row) -~~~ - -### Log in to the HTTP interface with a custom expiry - -Log in to the HTTP interface, by generating a new HTTP authentication token for the `web_user` user and specifying a token expiry of 4 hours and 30 minutes: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach auth-session login web_user --expire-after=4h30m -~~~ - -~~~ - username | session ID | authentication cookie ------------+--------------------+--------------------------------------------------------------------- - web_user | 784445853689282561 | session=CIGAtrWQq7rxChIQTXxYNNQxAYjyLAjHWxgUMQ==; Path=/; HttpOnly; Secure -(1 row) -~~~ - -### Log in to the HTTP interface with limited command output - -Log in to the HTTP interface, by generating a new HTTP authentication token for the `web_user` user, limiting command output to only the generated cookie: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach auth-session login web_user --only-cookie -~~~ - -~~~ -session=CIGA6t2q0LrxChIQV8QCF3vuYSasR7h4LPSfmg==; Path=/; HttpOnly; Secure -~~~ - -This is useful if you intend to use the cookie with other command line tools. For example, you might output the generated cookie to a local file, and then pass that file to `curl` using its `--cookie` flag: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach auth-session login web_user --certs-dir=certs --only-cookie > $HOME/.cockroachdb_api_key -$ curl --cookie $HOME/.cockroachdb_api_key https://localhost:8080/_status/logfiles/local -~~~ - -Of course you can also provide the cookie directly: - -{% include_cached copy-clipboard.html %} -~~~ shell -curl --cookie 'session=CIGA8I7/irvxChIQDtZQsMtn3AqpgDko6bldSw==; Path=/; HttpOnly; Secure' https://localhost:8080/_status/logfiles/local -~~~ - -### Terminate all active sessions for a user - -Terminate all active sessions for the `web_user` user: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach auth-session logout web_user -~~~ - -~~~ - username | session ID | revoked ------------+--------------------+----------------------------- - web_user | 784445853689282561 | 2022-08-02 18:24:50.819614 - web_user | 784447132063662081 | 2022-08-02 18:24:50.819614 - web_user | 784449147579924481 | 2022-08-02 18:47:20.105254 - web_user | 784449219848241153 | 2022-08-02 18:47:20.105254 -(4 rows) -~~~ - -Note that the output may include recently revoked sessions for this user as well. 
- -### List all sessions - -List all authenticated sessions to the HTTP interface, including currently active and recently expired sessions: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach auth-session list -~~~ - -~~~ - username | session ID | created | expires | revoked | last used ------------+--------------------+----------------------------+----------------------------+----------------------------+----------------------------- - root | 784428093743988737 | 2022-08-02 16:47:36.342338 | 2022-08-02 17:47:36.341997 | NULL | 2022-08-02 16:47:36.342338 - root | 784428586862215169 | 2022-08-02 16:50:06.830294 | 2022-08-02 17:50:06.829974 | NULL | 2022-08-02 16:50:06.830294 - web_user | 784447132063662081 | 2022-08-02 18:24:26.37664 | 2022-08-02 19:24:26.376299 | 2022-08-02 18:24:50.819614 | 2022-08-02 18:24:26.37664 - web_user | 784449147579924481 | 2022-08-02 18:34:41.463345 | 2022-08-02 19:34:41.463006 | 2022-08-02 18:47:20.105254 | 2022-08-02 18:34:41.463345 -~~~ - -A value of `NULL` in the `revoked` column indicates that the session is still active. - -## See also - -- [`cockroach` Commands Overview](cockroach-commands.html) -- [DB Console Overview](ui-overview.html) diff --git a/src/current/v22.1/cockroach-cert.md b/src/current/v22.1/cockroach-cert.md deleted file mode 100644 index 75f3774d26e..00000000000 --- a/src/current/v22.1/cockroach-cert.md +++ /dev/null @@ -1,356 +0,0 @@ ---- -title: cockroach cert -summary: A secure CockroachDB cluster uses TLS for encrypted inter-node and client-node communication. -toc: true -key: create-security-certificates.html -docs_area: reference.cli ---- - -To secure your CockroachDB cluster's inter-node and client-node communication, you need to provide a Certificate Authority (CA) certificate that has been used to sign keys and certificates (SSLs) for: - -- Nodes -- Clients -- DB Console (optional) - -To create these certificates and keys, use the `cockroach cert` [commands](cockroach-commands.html) with the appropriate subcommands and flags, use [`openssl` commands](https://wiki.openssl.org/index.php/), or use a [custom CA](create-security-certificates-custom-ca.html) (for example, a public CA or your organizational CA). - -{% include {{ page.version.version }}/filter-tabs/security-cert.md %} - -{{site.data.alerts.callout_success}}For details about when and how to change security certificates without restarting nodes, see Rotate Security Certificates.{{site.data.alerts.end}} - -## How security certificates work - -1. Using the `cockroach cert` command, you create a CA certificate and key and then node and client certificates that are signed by the CA certificate. Since you need access to a copy of the CA certificate and key to create node and client certs, it's best to create everything in one place. - -2. You then upload the appropriate node certificate and key and the CA certificate to each node, and you upload the appropriate client certificate and key and the CA certificate to each client. - -3. When nodes establish contact to each other, and when clients establish contact to nodes, they use the CA certificate to verify each other's identity. - -## Subcommands - -Subcommand | Usage ------------|------ -`create-ca` | Create the self-signed certificate authority (CA), which you'll use to create and authenticate certificates for your entire cluster. -`create-node` | Create a certificate and key for a specific node in the cluster. You specify all addresses at which the node can be reached and pass appropriate flags. 
-`create-client` | Create a certificate and key for a [specific user](create-user.html) accessing the cluster from a client. You specify the username of the user who will use the certificate and pass appropriate flags. -`list` | List certificates and keys found in the certificate directory. - -## Certificate directory - -When using `cockroach cert` to create node and client certificates, you will need access to a local copy of the CA certificate and key. It is therefore recommended to create all certificates and keys in one place and then distribute node and client certificates and keys appropriately. For the CA key, be sure to store it somewhere safe and keep a backup; if you lose it, you will not be able to add new nodes or clients to your cluster. For a tutorial of this process, see [Manual Deployment](manual-deployment.html). - -## Required keys and certificates - -The `create-*` subcommands generate the CA certificate and all node and client certificates and keys in a single directory specified by the `--certs-dir` flag, with the files named as follows: - -### Node key and certificates - -File name pattern | File usage --------------|------------ -`ca.crt` | CA certificate. -`node.crt` | Server certificate.

`node.crt` must be signed by `ca.crt` and must have `CN=node` and the list of IP addresses and DNS names listed in `Subject Alternative Name` field. CockroachDB also supports [wildcard notation in DNS names](https://en.wikipedia.org/wiki/Wildcard_certificate). -`node.key` | Key for server certificate. - -### Client key and certificates - -File name pattern | File usage --------------|------------ -`ca.crt` | CA certificate. -`client..crt` | Client certificate for `` (e.g., `client.root.crt` for user `root`).

Must be signed by `ca.crt`. Also, `client..crt` must have `CN=` (for example, `CN=marc` for `client.marc.crt`) -`client..key` | Key for the client certificate. - -Optionally, if you have a certificate issued by a public CA to securely access the DB Console, you need to place the certificate and key (`ui.crt` and `ui.key` respectively) in the directory specified by the `--certs-dir` flag. For more information, refer to [Use a UI certificate and key to access the DB Console](create-security-certificates-custom-ca.html#accessing-the-db-console-for-a-secure-cluster). - -Note the following: - -- By default, the `node.crt` is multi-functional, as in the same certificate is used for both incoming connections (from SQL and DB Console clients, and from other CockroachDB nodes) and for outgoing connections to other CockroachDB nodes. To make this possible, the `node.crt` created using the `cockroach cert` command has `CN=node` and the list of IP addresses and DNS names listed in `Subject Alternative Name` field. - -- The CA key is never loaded automatically by `cockroach` commands, so it should be created in a separate directory, identified by the `--ca-key` flag. - -### Key file permissions - -{{site.data.alerts.callout_info}} -This check is only relevant on macOS, Linux, and other UNIX-like systems. -{{site.data.alerts.end}} - -To reduce the likelihood of a malicious user or process accessing a certificate key (files ending in ".key"), we require that the certificate key be owned by one of the following system users: - -- The user that the CockroachDB process runs as. -- The system `root` user (not to be confused with the [CockroachDB `root` user](security-reference/authorization.html#root-user)) and the group that the CockroachDB process runs in. - -For example, if running the CockroachDB process as a system user named `cockroach`, we can use the `id cockroach` command to list each group the `cockroach` user is a member of: - -```shell -id cockroach -uid=1000(cockroach) gid=1000(cockroach) groups=1000(cockroach),1000(cockroach) -``` - -In the output, we can see that the system user `cockroach` is in the `cockroach` group (with the group ID or gid `1000`). - -If the key file is owned by the system `root` user (who has user ID `0`), CockroachDB won't be able to read it unless it has permission to read because of its group membership. Because we know that CockroachDB user is a member of the `cockroach` group, we can allow CockroachDB to read the key by changing the group owner of the key file to the `cockroach` group. We then give the group read permissions by running `chmod`. Notice that the `others` group has no permissions (the `0` of `740`). Only the `cockroach` user, a member of the `cockroach` group, or the system `root` user has permission to read the key. - -```shell -sudo chgrp cockroach ui.key -sudo chmod 0740 ui.key -``` - -However, if the `ui.key` file is owned by the `cockroach` system user, CockroachDB ignores the group ownership of the file, and requires that the permissions only allow the `cockroach` system user to interact with it (`0700` or `rwx------`). - -Note the following: - -- When running in Kubernetes, you will not be able to change the user that owns a certificate file mounted from a Secret or another Volume, but you will be able to override the group by setting the `fsGroup` flag in a Pod or Container's Security Context. In our example above, you would set `fsGroup` to "1000". 
You will also need to set the key's "mode" using the `mode` flag on individual items or the `defaultMode` flag if applying to the entire secret. - -- This check can be disabled by setting the environment variable `COCKROACH_SKIP_KEY_PERMISSION_CHECK` to `true`. - -## Synopsis - -Create the CA certificate and key: - -~~~ shell -$ cockroach cert create-ca \ - --certs-dir=[path-to-certs-directory] \ - --ca-key=[path-to-ca-key] -~~~ - -Create a node certificate and key: - -~~~ shell -$ cockroach cert create-node \ - [node-hostname] \ - [node-other-hostname] \ - [node-yet-another-hostname] \ - [hostname-in-wildcard-notation] \ - --certs-dir=[path-to-certs-directory] \ - --ca-key=[path-to-ca-key] -~~~ - -Create a client certificate and key: - -~~~ shell -$ cockroach cert create-client \ - [username] \ - --certs-dir=[path-to-certs-directory] \ - --ca-key=[path-to-ca-key] -~~~ - -List certificates and keys: - -~~~ shell -$ cockroach cert list \ - --certs-dir=[path-to-certs-directory] -~~~ - -View help: - -~~~ shell -$ cockroach cert --help -~~~ -~~~ shell -$ cockroach cert --help -~~~ - -## Flags - -The `cert` command and subcommands support the following [general-use](#general) and [logging](#logging) flags. - -### General - -Flag | Description ------|----------- -`--certs-dir` | The path to the [certificate directory](#certificate-directory) containing all certificates and keys needed by `cockroach` commands.

This flag is used by all subcommands.<br><br>**Default:** `${HOME}/.cockroach-certs/`
-`--ca-key` | The path to the private key protecting the CA certificate.<br><br>This flag is required for all `create-*` subcommands. When used with `create-ca` in particular, it defines where to create the CA key; the specified directory must exist.<br><br>**Env Variable:** `COCKROACH_CA_KEY`
-`--allow-ca-key-reuse` | When running the `create-ca` subcommand, pass this flag to re-use an existing CA key identified by `--ca-key`. Otherwise, a new CA key will be generated.<br><br>This flag is used only by the `create-ca` subcommand. It helps avoid accidentally re-using an existing CA key.
-`--overwrite` | When running `create-*` subcommands, pass this flag to allow existing files in the certificate directory (`--certs-dir`) to be overwritten.<br><br>This flag helps avoid accidentally overwriting sensitive certificates and keys.
-`--lifetime` | The lifetime of the certificate, in hours, minutes, and seconds.<br><br>Certificates are valid from the time they are created through the duration specified in `--lifetime`.<br><br>**Default:** `87840h0m0s` (10 years)
-`--key-size` | The size of the CA, node, or client key, in bits.

**Default:** `2048` - `--also-generate-pkcs8-key` | Also create a key in [PKCS#8 format](https://tools.ietf.org/html/rfc5208), which is the standard key encoding format used by Java. For example usage, see [Build a Java App with CockroachDB](build-a-java-app-with-cockroachdb.html). - -### Logging - -{% include {{ page.version.version }}/misc/logging-defaults.md %} - -## Examples - -### Create the CA certificate and key pair - -1. Create two directories: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ mkdir certs - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ mkdir my-safe-directory - ~~~ - - `certs`: You'll generate your CA certificate and all node and client certificates and keys in this directory and then upload some of the files to your nodes. - - `my-safe-directory`: You'll generate your CA key in this directory and then reference the key when generating node and client certificates. After that, you'll keep the key safe and secret; you will not upload it to your nodes. - -2. Generate the CA certificate and key: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-ca \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ ls -l certs - ~~~ - - ~~~ - total 8 - -rw-r--r-- 1 maxroach maxroach 1.1K Jul 10 14:12 ca.crt - ~~~ - -### Create the certificate and key pairs for nodes - -1. Generate the certificate and key for the first node: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-node \ - node1.example.com \ - node1.another-example.com \ - *.dev.another-example.com \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ ls -l certs - ~~~ - - ~~~ - total 24 - -rw-r--r-- 1 maxroach maxroach 1.1K Jul 10 14:12 ca.crt - -rw-r--r-- 1 maxroach maxroach 1.2K Jul 10 14:16 node.crt - -rw------- 1 maxroach maxroach 1.6K Jul 10 14:16 node.key - ~~~ - -2. Upload certificates to the first node: - - {% include_cached copy-clipboard.html %} - ~~~ shell - # Create the certs directory: - $ ssh @ "mkdir certs" - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - # Upload the CA certificate and node certificate and key: - $ scp certs/ca.crt \ - certs/node.crt \ - certs/node.key \ - @:~/certs - ~~~ - -3. Delete the local copy of the first node's certificate and key: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ rm certs/node.crt certs/node.key - ~~~ - - {{site.data.alerts.callout_info}}This is necessary because the certificates and keys for additional nodes will also be named node.crt and node.key As an alternative to deleting these files, you can run the next cockroach cert create-node commands with the --overwrite flag.{{site.data.alerts.end}} - -4. Create the certificate and key for the second node: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-node \ - node2.example.com \ - node2.another-example.com \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ ls -l certs - ~~~ - - ~~~ - total 24 - -rw-r--r-- 1 maxroach maxroach 1.1K Jul 10 14:12 ca.crt - -rw-r--r-- 1 maxroach maxroach 1.2K Jul 10 14:17 node.crt - -rw------- 1 maxroach maxroach 1.6K Jul 10 14:17 node.key - ~~~ - -5. 
Upload certificates to the second node: - - {% include_cached copy-clipboard.html %} - ~~~ shell - # Create the certs directory: - $ ssh @ "mkdir certs" - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - # Upload the CA certificate and node certificate and key: - $ scp certs/ca.crt \ - certs/node.crt \ - certs/node.key \ - @:~/certs - ~~~ - -6. Repeat steps 3 - 5 for each additional node. - -### Create the certificate and key pair for a client - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach cert create-client \ -maxroach \ ---certs-dir=certs \ ---ca-key=my-safe-directory/ca.key -~~~ - -{% include_cached copy-clipboard.html %} -~~~ shell -$ ls -l certs -~~~ - -~~~ -total 40 --rw-r--r-- 1 maxroach maxroach 1.1K Jul 10 14:12 ca.crt --rw-r--r-- 1 maxroach maxroach 1.1K Jul 10 14:13 client.maxroach.crt --rw------- 1 maxroach maxroach 1.6K Jul 10 14:13 client.maxroach.key --rw-r--r-- 1 maxroach maxroach 1.2K Jul 10 14:17 node.crt --rw------- 1 maxroach maxroach 1.6K Jul 10 14:17 node.key -~~~ - -### List certificates and keys - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach cert list \ ---certs-dir=certs -~~~ - -~~~ -Certificate directory: certs -+-----------------------+---------------------+---------------------+------------+--------------------------------------------------------+-------+ -| Usage | Certificate File | Key File | Expires | Notes | Error | -+-----------------------+---------------------+---------------------+------------+--------------------------------------------------------+-------+ -| Certificate Authority | ca.crt | | 2027/07/18 | num certs: 1 | | -| Node | node.crt | node.key | 2022/07/14 | addresses: node2.example.com,node2.another-example.com | | -| Client | client.maxroach.crt | client.maxroach.key | 2022/07/14 | user: maxroach | | -+-----------------------+---------------------+---------------------+------------+--------------------------------------------------------+-------+ -(3 rows) -~~~ - -## See also - -- [Security overview](security-reference/security-overview.html) -- [Authentication](authentication.html) -- [Client Connection Parameters](connection-parameters.html) -- [Rotate Security Certificates](rotate-certificates.html) -- [Manual Deployment](manual-deployment.html) -- [Orchestrated Deployment](kubernetes-overview.html) -- [Local Deployment](secure-a-cluster.html) -- [`cockroach` Commands Overview](cockroach-commands.html) diff --git a/src/current/v22.1/cockroach-commands.md b/src/current/v22.1/cockroach-commands.md deleted file mode 100644 index 3135943b6a3..00000000000 --- a/src/current/v22.1/cockroach-commands.md +++ /dev/null @@ -1,59 +0,0 @@ ---- -title: cockroach Commands Overview -summary: Learn the commands for configuring, starting, and managing a CockroachDB cluster. -toc: true -docs_area: reference.cli ---- - -This page introduces the `cockroach` commands for configuring, starting, and managing a CockroachDB cluster, as well as environment variables that can be used in place of certain flags. - -You can run `cockroach help` in your shell to get similar guidance. - -## Commands - -Command | Usage ---------|---- -[`cockroach start`](cockroach-start.html) | Start a node as part of a multi-node cluster. -[`cockroach init`](cockroach-init.html) | Initialize a multi-node cluster. -[`cockroach start-single-node`](cockroach-start-single-node.html) | Start a single-node cluster. -[`cockroach cert`](cockroach-cert.html) | Create CA, node, and client certificates. 
-[`cockroach sql`](cockroach-sql.html) | Use the built-in SQL client. -[`cockroach sqlfmt`](cockroach-sqlfmt.html) | Reformat SQL queries for enhanced clarity. -[`cockroach node`](cockroach-node.html) | List node IDs, show their status, decommission nodes for removal, or recommission nodes. -[`cockroach nodelocal upload`](cockroach-nodelocal-upload.html) | Upload a file to the `externalIODir` on a node's local file system. -[`cockroach auth-session`](cockroach-auth-session.html) | Create and manage web sessions and authentication tokens to the HTTP interface from the command line. -[`cockroach demo`](cockroach-demo.html) | Start a temporary, in-memory CockroachDB cluster, and open an interactive SQL shell to it. -[`cockroach debug ballast`](cockroach-debug-ballast.html) | Create a large, unused file in a node's storage directory that you can delete if the node runs out of disk space. -[`cockroach debug encryption-active-key`](cockroach-debug-encryption-active-key.html) | View the encryption algorithm and store key. -[`cockroach debug job-trace`](cockroach-debug-job-trace.html) | Generate trace payloads for an executing job from a particular node. -[`cockroach debug list-files`](cockroach-debug-list-files.html) | Show the files that will be collected by using `cockroach debug zip`. -[`cockroach debug merge-logs`](cockroach-debug-merge-logs.html) | Merge log files from multiple nodes into a single time-ordered stream of messages with an added per-message prefix to indicate the corresponding node. -[`cockroach debug tsdump`](cockroach-debug-tsdump.html) | Generate a diagnostic dump of timeseries metrics that can help Cockroach Labs troubleshoot issues with your cluster. -[`cockroach debug zip`](cockroach-debug-zip.html) | Generate a `.zip` file that can help Cockroach Labs troubleshoot issues with your cluster. -[`cockroach convert-url`](connection-parameters.html#convert-a-url-for-different-drivers) | Convert a connection URL to a format recognized by a [supported client driver](third-party-database-tools.html#drivers). -[`cockroach gen`](cockroach-gen.html) | Generate man pages, a bash completion file, example SQL data, or an HAProxy configuration file for a running cluster. -[`cockroach statement-diag`](cockroach-statement-diag.html) | Manage and download statement diagnostics bundles. -[`cockroach userfile upload`](cockroach-userfile-upload.html) | Upload a file to user-scoped file storage. -[`cockroach userfile list`](cockroach-userfile-list.html) | List the files stored in the user-scoped file storage. -[`cockroach userfile get`](cockroach-userfile-get.html) | Fetch a file from the user-scoped file storage. -[`cockroach userfile delete`](cockroach-userfile-delete.html) | Delete the files stored in the user-scoped file storage. -[`cockroach version`](cockroach-version.html) | Output CockroachDB version details. -[`cockroach workload`](cockroach-workload.html) | Run a built-in load generator against a cluster. -[`cockroach import`](cockroach-import.html) | Import a table or database from a local dump file into a running cluster. Supported file formats are `PGDUMP` and `MYSQLDUMP`. - -## Environment variables - -For many common `cockroach` flags, such as `--port` and `--user`, you can set environment variables once instead of manually passing the flags each time you execute commands. - -- To find out which flags support environment variables, see the documentation for each [command](#commands). 
-- To output the current configuration of CockroachDB and other environment variables, run `env`. -- When a node uses environment variables on [startup](cockroach-start.html), the variable names are printed to the node's logs; however, the variable values are not. - -CockroachDB prioritizes command flags, environment variables, and defaults as follows: - -1. If a flag is set for a command, CockroachDB uses it. -2. If a flag is not set for a command, CockroachDB uses the corresponding environment variable. -3. If neither the flag nor environment variable is set, CockroachDB uses the default for the flag. -4. If there's no flag default, CockroachDB gives an error. - -For more details, see [Client Connection Parameters](connection-parameters.html). diff --git a/src/current/v22.1/cockroach-debug-ballast.md b/src/current/v22.1/cockroach-debug-ballast.md deleted file mode 100644 index d5b33de808b..00000000000 --- a/src/current/v22.1/cockroach-debug-ballast.md +++ /dev/null @@ -1,61 +0,0 @@ ---- -title: cockroach debug ballast -summary: Create a large, unused file in a node's storage directory that you can delete if the node runs out of disk space. -toc: true -docs_area: reference.cli ---- - - CockroachDB automatically creates an emergency ballast file at startup time. The `cockroach debug ballast` command is still available but deprecated. For more information about how automatic ballast file creation works, see [automatic ballast files](cluster-setup-troubleshooting.html#automatic-ballast-files). - -The `cockroach debug ballast` [command](cockroach-commands.html) creates a large, unused file that you can place in a node's storage directory. In the case that a node runs out of disk space and shuts down, you can delete the ballast file to free up enough space to be able to restart the node. - -- Do not run `cockroach debug ballast` with a unix `root` user. Doing so brings the risk of mistakenly affecting system directories or files. -- `cockroach debug ballast` now refuses to overwrite the target ballast file if it already exists. This change is intended to prevent mistaken uses of the `ballast` command. Consider adding an `rm` command to scripts that integrate `cockroach debug ballast`, or provide a new file name every time and then remove the old file. -- In addition to placing a ballast file in each node's storage directory, it is important to actively [monitor remaining disk space](monitoring-and-alerting.html#events-to-alert-on). -- Ballast files may be created in many ways, including the standard `dd` command. `cockroach debug ballast` uses the `fallocate` system call when available, so it will be faster than `dd`. - -## Subcommands - -{% include {{ page.version.version }}/misc/debug-subcommands.md %} - -## Synopsis - -Create a ballast file: - -~~~ shell -$ cockroach debug ballast [path to ballast file] [flags] -~~~ - -View help: - -~~~ shell -$ cockroach debug ballast --help -~~~ - -## Flags - -Flag | Description ------|----------- -`--size`
`-z` | The amount of space to fill, or to leave available, in a node's storage directory via a ballast file. A positive value sets the size of the ballast file; a negative value sets the amount of space to leave free after the ballast file is created. This can be a percentage (notated as a decimal or with `%`) or any bytes-based unit, for example:<br>

`--size=1000000000 ----> 1000000000 bytes`
`--size=1GiB ----> 1073741824 bytes`
`--size=5% ----> 5% of available space`
`--size=0.05 ----> 5% of available space`
`--size=.05 ----> 5% of available space`

**Default:** `1GB` - -## Examples - -### Create a 1GB ballast file (default) - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach debug ballast cockroach-data/ballast.txt -~~~ - -### Create a ballast file of a different size - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach debug ballast cockroach-data/ballast.txt --size=2GB -~~~ - -## See also - -- [`cockroach` Commands Overview](cockroach-commands.html) -- [Troubleshooting Overview](troubleshooting-overview.html) -- [Production Checklist](recommended-production-settings.html) diff --git a/src/current/v22.1/cockroach-debug-encryption-active-key.md b/src/current/v22.1/cockroach-debug-encryption-active-key.md deleted file mode 100644 index cfc9497179b..00000000000 --- a/src/current/v22.1/cockroach-debug-encryption-active-key.md +++ /dev/null @@ -1,45 +0,0 @@ ---- -title: cockroach debug encryption-active-key -summary: Learn the command for viewing the algorithm and store key for an encrypted store. -toc: true -key: debug-encryption-active-key.html -docs_area: reference.cli ---- - -The `cockroach debug encryption-active-key` [command](cockroach-commands.html) displays the encryption algorithm and store key for an encrypted store. - -## Synopsis - -~~~ shell -$ cockroach debug encryption-active-key [path specified by the store flag] -~~~ - -## Subcommands - -{% include {{ page.version.version }}/misc/debug-subcommands.md %} - -## Example - -Start a node with {{ site.data.products.enterprise }} Encryption At Rest enabled: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach start --store=cockroach-data --enterprise-encryption=path=cockroach-data,key=aes-128.key,old-key=plain --insecure --certs-dir=certs -~~~ - -View the encryption algorithm and store key: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach debug encryption-active-key cockroach-data -~~~ - -~~~ -AES128_CTR:be235c29239aa84a48e5e1874d76aebf7fb3c1bdc438cec2eb98de82f06a57a0 -~~~ - -## See also - -- [File an Issue](file-an-issue.html) -- [`cockroach` Commands Overview](cockroach-commands.html) -- [Troubleshooting Overview](troubleshooting-overview.html) diff --git a/src/current/v22.1/cockroach-debug-job-trace.md b/src/current/v22.1/cockroach-debug-job-trace.md deleted file mode 100644 index f78092504aa..00000000000 --- a/src/current/v22.1/cockroach-debug-job-trace.md +++ /dev/null @@ -1,76 +0,0 @@ ---- -title: cockroach debug job-trace -summary: Learn the commands for collecting debug information from all nodes in your cluster. -toc: true -docs_area: reference.cli ---- - -{{site.data.alerts.callout_danger}} -We strongly recommend only using `cockroach debug job-trace` when working directly with the [Cockroach Labs support team](support-resources.html). -{{site.data.alerts.end}} - -The [`cockroach debug job-trace`](cockroach-commands.html) command connects to your cluster and collects trace payloads for a running, traceable [job](show-jobs.html#show-jobs) ([**imports**](import-into.html) or [**backups**](take-full-and-incremental-backups.html)). The trace payloads are helpful for debugging why a job is not running as expected or to add more context to logs gathered from the [`cockroach debug zip`](cockroach-debug-zip.html) command. - -The node that `cockroach debug job-trace` is run against will communicate to all nodes in the cluster in order to retrieve the trace payloads. This will deliver a zip file that contains [trace files](#files) for all the nodes participating in the execution of the job. 
The files hold information on the executing job's [trace spans](show-trace.html#trace-description), which describe the sub-operations being performed. Specifically, these files will contain the spans that have not yet completed and are associated with the execution of that particular job. Using this command for a job that is not currently running will result in an empty zip file. - -## Synopsis - -~~~ shell -$ cockroach debug job-trace --url= {flags} -~~~ - -See [`SHOW JOBS`](show-jobs.html#show-jobs) for details on capturing a `job_id`. - -## Subcommands - -{% include {{ page.version.version }}/misc/debug-subcommands.md %} - -## Flags - -The `debug job-trace` subcommand supports the following [general-use](#general) and [client connection](#client-connection) flags. - -### General - -Flag | Description ------|----------- -`--timeout` | Return an error if the command does not conclude within a specified nonzero value. The timeout is suffixed with `s` (seconds), `m` (minutes), or `h` (hours). For example:

`--timeout=2m` - -### Client connection - -Flag | Description ------|------------ -`--user`

`-u` | The [SQL user](create-user.html) that will own the client session.

**Env Variable:** `COCKROACH_USER`
**Default:** `root` -`--insecure` | Use an insecure connection.

**Env Variable:** `COCKROACH_INSECURE`
**Default:** `false` -`--cert-principal-map` | A comma-separated list of `<cert-principal>:<db-principal>` mappings. This allows mapping the principal in a cert to a DB principal such as `node` or `root` or any SQL user. This is intended for use in situations where the certificate management system places restrictions on the `Subject.CommonName` or `SubjectAlternateName` fields in the certificate (e.g., disallowing a `CommonName` like `node` or `root`). If multiple mappings are provided for the same `<cert-principal>`, the last one specified in the list takes precedence. A principal not specified in the map is passed through as-is via the identity function. A cert is allowed to authenticate a DB principal if the DB principal name is contained in the mapped `CommonName` or DNS-type `SubjectAlternateName` fields. -`--certs-dir` | The path to the [certificate directory](cockroach-cert.html) containing the CA and client certificates and client key.<br>

**Env Variable:** `COCKROACH_CERTS_DIR`
**Default:** `${HOME}/.cockroach-certs/` - `--url` | A [connection URL](connection-parameters.html#connect-using-a-url) to use instead of the other arguments.

**Env Variable:** `COCKROACH_URL`
**Default:** no URL - -## Files - -The `cockroach debug job-trace` command will output a zip file to where the command is run (`-job-trace.zip`). The zip file will contain trace files for all the nodes participating in the job's execution. For example, `node1-trace.txt`. - -See the [`SHOW TRACE FOR SESSION`](show-trace.html#response) page for more information on trace responses. - -## Example - -### Generate a job-trace zip file - -To generate the `job-trace` zip file, use your [connection string](cockroach-start.html#standard-output) to pull the trace spans: - -{% include_cached copy-clipboard.html %} -~~~ shell -cockroach debug job-trace 698977774875279361 --url=postgresql://root@localhost:26257?sslmode=disable -~~~ - -You will find the zip file in the directory you ran the command from: - -~~~ -698977774875279361-job-trace.zip -~~~ - -## See also - -- [File an Issue](file-an-issue.html) -- [`cockroach` Commands Overview](cockroach-commands.html) -- [Troubleshooting Overview](troubleshooting-overview.html) -- [Support Resources](support-resources.html) diff --git a/src/current/v22.1/cockroach-debug-list-files.md b/src/current/v22.1/cockroach-debug-list-files.md deleted file mode 100644 index 267c8f1a4aa..00000000000 --- a/src/current/v22.1/cockroach-debug-list-files.md +++ /dev/null @@ -1,76 +0,0 @@ ---- -title: cockroach debug list-files -summary: Learn the command for listing the files collected in the debug zip. -toc: true -key: debug-list-files.html -docs_area: reference.cli ---- - -The `cockroach debug list-files` [command](cockroach-commands.html) shows the files that will be collected by using [`cockroach debug zip`](cockroach-debug-zip.html). - -{{site.data.alerts.callout_info}} -The files listed include logs, heap profiles, goroutine dumps, and CPU profiles. Other [files](cockroach-debug-zip.html#files) generated by `cockroach debug zip` are not listed by `cockroach debug list-files`. -{{site.data.alerts.end}} - -## Subcommands - -{% include {{ page.version.version }}/misc/debug-subcommands.md %} - -## Synopsis - -~~~ shell -$ cockroach debug list-files {flags} -~~~ - -## Flags - -The `debug list-files` subcommand supports the following [general-use](#general), [client connection](#client-connection), and [logging](#logging) flags. - -### General - -Flag | Description ------|----------- -`--exclude-files` | [Files](cockroach-debug-zip.html#files) to exclude from the generated `.zip`. This can be used to limit the size of the generated `.zip`, and affects logs, heap profiles, goroutine dumps, and/or CPU profiles. The files are specified as a comma-separated list of [glob patterns](https://en.wikipedia.org/wiki/Glob_(programming)). For example:

`--exclude-files=*.log`

Note that this flag is applied _after_ `--include-files`. -`--exclude-nodes` | Specify nodes to exclude from inspection as a comma-separated list or range of node IDs. For example:<br>

`--exclude-nodes=1,10,13-15` -`--files-from` | Start timestamp for log file, goroutine dump, and heap profile collection. This can be used to limit the size of the generated `.zip`, which is increased by these files. The timestamp uses the format `YYYY-MM-DD`, followed optionally by `HH:MM:SS` or `HH:MM`. For example:

`--files-from='2021-07-01 15:00'`

When specifying a narrow time window, we recommend adding extra seconds/minutes to account for uncertainties such as clock drift.

**Default:** 48 hours before now -`--files-until` | End timestamp for log file, goroutine dump, and heap profile collection. This can be used to limit the size of the generated `.zip`, which is increased by these files. The timestamp uses the format `YYYY-MM-DD`, followed optionally by `HH:MM:SS` or `HH:MM`. For example:

`--files-until='2021-07-01 16:00'`

When specifying a narrow time window, we recommend adding extra seconds/minutes to account for uncertainties such as clock drift.

**Default:** 24 hours beyond now (to include files created during `.zip` creation) -`--format` | Specify a format to display table rows. This can be `tsv`, `csv`, `table`, `records`, `sql`, `raw`, or `html`.

**Default:** `table` (interactive sessions), `tsv` (non-interactive sessions) -`--include-files` | [Files](cockroach-debug-zip.html#files) to include in the generated `.zip`. This can be used to limit the size of the generated `.zip`, and affects logs, heap profiles, goroutine dumps, and/or CPU profiles. The files are specified as a comma-separated list of [glob patterns](https://en.wikipedia.org/wiki/Glob_(programming)). For example:

`--include-files=*.pprof`

Note that this flag is applied _before_ `--exclude-files`. -`--nodes` | Specify nodes to inspect as a comma-separated list or range of node IDs. For example:

`--nodes=1,10,13-15` - -### Client connection - -{% include {{ page.version.version }}/sql/connection-parameters.md %} -`--cluster-name` | The cluster name to use to verify the cluster's identity. If the cluster has a cluster name, you must include this flag. For more information, see [`cockroach start`](cockroach-start.html#general). -`--disable-cluster-name-verification` | Disables the cluster name check for this command. This flag must be paired with `--cluster-name`. For more information, see [`cockroach start`](cockroach-start.html#general). - -### Logging - -{% include {{ page.version.version }}/misc/debug-subcommands.md %} - -## Examples - -### List all collected files - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach debug list-files -~~~ - -{{site.data.alerts.callout_info}} -The files listed include logs, heap profiles, goroutine dumps, and CPU profiles. Other [files](cockroach-debug-zip.html#files) generated by `cockroach debug zip` are not listed by `cockroach debug list-files`. -{{site.data.alerts.end}} - -### List all collected log files - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach debug list-files --include-files=*.log -~~~ - -### List all collected files (TSV format) - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach debug list-files --format=tsv -~~~ diff --git a/src/current/v22.1/cockroach-debug-merge-logs.md b/src/current/v22.1/cockroach-debug-merge-logs.md deleted file mode 100644 index 54c110455ba..00000000000 --- a/src/current/v22.1/cockroach-debug-merge-logs.md +++ /dev/null @@ -1,84 +0,0 @@ ---- -title: cockroach debug merge-logs -summary: Learn the command for merging the collected debug logs from all nodes in your cluster. -toc: true -key: debug-merge-logs.html -docs_area: reference.cli ---- - -The `cockroach debug merge-logs` [command](cockroach-commands.html) merges log files from multiple nodes into a single time-ordered stream of messages with an added per-message prefix to indicate the corresponding node. You can use it in conjunction with logs collected using the [`debug zip`](cockroach-debug-zip.html) command to aid in debugging. - -{{site.data.alerts.callout_danger}} -The file produced by `cockroach debug zip` can contain highly [sensitive, identifiable information](configure-logs.html#redact-logs), such as usernames, hashed passwords, and possibly your table's data. You can use the [`--redact`](#example) flag to redact the sensitive data out of log files and crash reports before sharing them with Cockroach Labs. -{{site.data.alerts.end}} - -## Subcommands - -{% include {{ page.version.version }}/misc/debug-subcommands.md %} - -## Synopsis - -~~~ shell -$ cockroach debug merge-logs [log file directory] [flags] -~~~ - -## Flags - -Use the following flags to filter the `debug merge-logs` results for a specified regular expression or time range. - -Flag | Description ------|----------- -`--filter` | Limit the results to the specified regular expression -`--from` | Start time for the time range filter. -`--to` | End time for the time range filter. -`--redact` | Redact [sensitive data](configure-logs.html#redact-logs) from the log files. 
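
These filters can be combined in a single `merge-logs` invocation. The following is only a rough sketch: the `debug/nodes/*/logs/*` path assumes the directory layout produced by unzipping a `cockroach debug zip` archive, and the timestamps are placeholders in the same format as the examples below.

{% include_cached copy-clipboard.html %}
~~~ shell
# Sketch: merge logs from all nodes, keep only a one-minute window,
# and redact sensitive values before sharing the output.
$ cockroach debug merge-logs debug/nodes/*/logs/* \
--from="220713 18:36:00.000000" \
--to="220713 18:37:00.000000" \
--redact
~~~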
- -## Example - -Generate a debug zip file: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach debug zip ./cockroach-data/logs/debug.zip --insecure -~~~ - -Unzip the file: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ unzip ./cockroach-data/logs/debug.zip -~~~ - -Merge the logs in the debug folder: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach debug merge-logs debug/nodes/*/logs/* -~~~ - -Alternatively, filter the merged logs for a specified time range: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach debug merge-logs debug/nodes/*/logs/* --from="220713 18:36:28.208553" --to="220713 18:36:29.232864" -~~~ - -You can also filter the merged logs for a regular expression: - -{% include_cached copy-clipboard.html %} -~~~ shell -cockroach debug merge-logs debug/nodes/*/logs/* --filter="RUNNING IN INSECURE MODE" -~~~ - -You can redact sensitive information from the merged logs: - -{% include_cached copy-clipboard.html %} -~~~ shell -cockroach debug merge-logs --redact debug/nodes/*/logs/* -~~~ - -## See also - -- [File an Issue](file-an-issue.html) -- [`cockroach` Commands Overview](cockroach-commands.html) -- [Troubleshooting Overview](troubleshooting-overview.html) diff --git a/src/current/v22.1/cockroach-debug-tsdump.md b/src/current/v22.1/cockroach-debug-tsdump.md deleted file mode 100644 index c0ddfd4c418..00000000000 --- a/src/current/v22.1/cockroach-debug-tsdump.md +++ /dev/null @@ -1,111 +0,0 @@ ---- -title: cockroach debug tsdump -summary: Learn the commands for collecting timeseries debug information from all nodes in your cluster. -toc: true -key: debug-tsdump.html -docs_area: reference.cli ---- - -The `cockroach debug tsdump` [command](cockroach-commands.html) connects to your cluster and collects timeseries diagnostic data from each active node (inactive nodes are not included). This includes both current and historical runtime metrics for your cluster, including those exposed in the [DB Console Metrics](ui-overview-dashboard.html) pages as well as internal metrics. - -`cockroach debug tsdump` is mostly used in tandem with the [`cockroach debug zip`](cockroach-debug-zip.html) command to gather diagnostic data during escalations to Cockroach Labs support. Follow the steps in this procedure to gather and prepare the timeseries diagnostic data and prepare it for transit to Cockroach Labs. - -## Subcommands - -{% include {{ page.version.version }}/misc/debug-subcommands.md %} - -## Synopsis - -~~~ shell -$ cockroach debug tsdump {flags} > {dump file destination} -~~~ - -{{site.data.alerts.callout_info}} -The following [flags](#flags) must apply to an active CockroachDB node. If no nodes are live, you must [start at least one node](cockroach-start.html). -{{site.data.alerts.end}} - -## Flags - -The `debug tsdump` subcommand supports the following [general-use](#general), [client connection](#client-connection), and [logging](#logging) flags. - -### General - -Flag | Description ------|----------- -`--format` | The output format to write the collected diagnostic data. Valid options are `text`, `csv`, `tsv`, `raw`.

**Default:** `text` -`--from` | The oldest timestamp to include (inclusive), in the format `YYYY-MM-DD [HH:MM[:SS]]`.

**Default:** `0001-01-01 00:00:00` -`--to` | The newest timestamp to include (inclusive), in the format `YYYY-MM-DD [HH:MM[:SS]]`.

**Default:** Current timestamp plus 29 hours - -### Client connection - -Flag | Description -----|------------ -`--cert-principal-map` | A comma-separated list of `<cert-principal>:<db-principal>` mappings. This allows mapping the principal in a cert to a DB principal such as `node` or `root` or any SQL user. This is intended for use in situations where the certificate management system places restrictions on the `Subject.CommonName` or `SubjectAlternateName` fields in the certificate (e.g., disallowing a `CommonName` like `node` or `root`). If multiple mappings are provided for the same `<cert-principal>`, the last one specified in the list takes precedence. A principal not specified in the map is passed through as-is via the identity function. A cert is allowed to authenticate a DB principal if the DB principal name is contained in the mapped `CommonName` or DNS-type `SubjectAlternateName` fields. -`--certs-dir` | The path to the [certificate directory](cockroach-cert.html) containing the CA and client certificates and client key.<br>

**Env Variable:** `COCKROACH_CERTS_DIR`
**Default:** `${HOME}/.cockroach-certs/` -`--cluster-name` | The cluster name to use to verify the cluster's identity. If the cluster has a cluster name, you must include this flag. For more information, see [`cockroach start`](cockroach-start.html#general). -`--disable-cluster-name-verification` | Disables the cluster name check for this command. This flag must be paired with `--cluster-name`. For more information, see [`cockroach start`](cockroach-start.html#general). -`--host` | The server host and port number to connect to. This can be the address of any node in the cluster.

**Env Variable:** `COCKROACH_HOST`
**Default:** `localhost:26257` -`--insecure` | Use an insecure connection.

**Env Variable:** `COCKROACH_INSECURE`
**Default:** `false` - `--url` | A [connection URL](connection-parameters.html#connect-using-a-url) to use instead of the other arguments. To convert a connection URL to the syntax that works with your client driver, run [`cockroach convert-url`](connection-parameters.html#convert-a-url-for-different-drivers).

**Env Variable:** `COCKROACH_URL`
**Default:** no URL - -### Logging - -By default, this command logs messages to `stdout`. If you need to troubleshoot this command's behavior, you can [customize its logging behavior](configure-logs.html). - -## Examples - -### Generate a tsdump `gob` file - -Generate the tsdump `gob` file for an insecure CockroachDB cluster: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach debug tsdump --format=raw --insecure > tsdump.gob -~~~ - -Generate the tsdump `gob` file for a secure CockroachDB cluster: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach debug tsdump --format=raw --certs-dir=${HOME}/.cockroach-certs/ > tsdump.gob -~~~ - -{{site.data.alerts.callout_info}} -Secure examples assume you have the appropriate certificates in the default certificate directory, `${HOME}/.cockroach-certs/`. See the [`cockroach cert`](cockroach-cert.html) documentation for more information. -{{site.data.alerts.end}} - -### Generate a tsdump `gob` file and compress using `gzip` - -Generate a tsdump `gob` file for an insecure CockroachDB cluster, and compress using `gzip` in preparation to send to Cockroach Labs for troubleshooting. Your server must have `gzip` installed: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach debug tsdump --format=raw --insecure > tsdump.gob -gzip tsdump.gob -~~~ - -Generate a tsdump `gob` file for a secure CockroachDB cluster, and compress using `gzip` in preparation to send to Cockroach Labs for troubleshooting. Your server must have `gzip` installed: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach debug tsdump --format=raw --certs-dir=${HOME}/.cockroach-certs/ > tsdump.gob -gzip tsdump.gob -~~~ - -{{site.data.alerts.callout_info}} -Secure examples assume you have the appropriate certificates in the default certificate directory, `${HOME}/.cockroach-certs/`. See the [`cockroach cert`](cockroach-cert.html) documentation for more information. -{{site.data.alerts.end}} - -### Generate a tsdump `gob` file with a custom timestamp range - -Generate a tsdump `gob` file specifying a custom timestamp range to limit the data collection to a specific interval. This is useful for reducing the size of the resulting `gob` file if the data needed to troubleshoot falls within a known timestamp range: - -~~~ shell -$ cockroach debug tsdump --format=raw --from='2023-01-10 01:00:00' --to='2023-01-20 23:59:59' > tsdump.gob -~~~ - -## See also - -- [File an Issue](file-an-issue.html) -- [`cockroach` Commands Overview](cockroach-commands.html) -- [Troubleshooting Overview](troubleshooting-overview.html) diff --git a/src/current/v22.1/cockroach-debug-zip.md b/src/current/v22.1/cockroach-debug-zip.md deleted file mode 100644 index da1b3737e0b..00000000000 --- a/src/current/v22.1/cockroach-debug-zip.md +++ /dev/null @@ -1,173 +0,0 @@ ---- -title: cockroach debug zip -summary: Learn the commands for collecting debug information from all nodes in your cluster. -toc: true -key: debug-zip.html -docs_area: reference.cli ---- - -The `cockroach debug zip` [command](cockroach-commands.html) connects to your cluster and gathers information from each active node into a single `.zip` file (inactive nodes are not included). For details on the `.zip` contents, see [Files](#files). - -You can use the [`cockroach debug merge-logs`](cockroach-debug-merge-logs.html) command in conjunction with `cockroach debug zip` to merge the collected logs into one file, making them easier to parse. 
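
For example, a minimal sketch of that two-step workflow (assuming an insecure local cluster; the archive name and the `debug/` directory layout follow the examples on this page and on the `cockroach debug merge-logs` page):

{% include_cached copy-clipboard.html %}
~~~ shell
# Gather diagnostic data from every active node into a single archive.
$ cockroach debug zip ./debug.zip --insecure

# Extract the archive, then merge the per-node logs into one
# time-ordered stream for easier parsing.
$ unzip ./debug.zip
$ cockroach debug merge-logs debug/nodes/*/logs/*
~~~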
- -{{site.data.alerts.callout_danger}} -The files produced by `cockroach debug zip` can contain [highly sensitive, personally-identifiable information (PII)](configure-logs.html#redact-logs), such as usernames, hashed passwords, and possibly table data. Use the [`--redact`](#redact-sensitive-information) flag to configure CockroachDB to redact sensitive data when generating the `.zip` file (excluding range keys) if intending to share it with Cockroach Labs. -{{site.data.alerts.end}} - -## Details - -### Use cases - -There are two scenarios in which `debug zip` is useful: - -- To collect all of your nodes' logs, which you can then parse to locate issues. You can optionally use the [flags](#flags) to [retrieve only the log files](#generate-a-debug-zip-file-with-logs-only). For more information about logs, see [Logging](logging-overview.html). Also note: - - - Nodes that are currently [down](cluster-setup-troubleshooting.html#node-liveness-issues) cannot deliver their logs over the network. For these nodes, you must log on to the machine where the `cockroach` process would otherwise be running, and gather the files manually. - - - Nodes that are currently up but disconnected from other nodes (e.g., because of a [network partition](cluster-setup-troubleshooting.html#network-partition)) may not be able to respond to `debug zip` requests forwarded by other nodes, but can still respond to requests for data when asked directly. In such situations, we recommend using the [`--host` flag](#client-connection) to point `debug zip` at each of the disconnected nodes until data has been gathered for the entire cluster. - -- If you experience severe or difficult-to-reproduce issues with your cluster, Cockroach Labs might ask you to send us your cluster's debugging information using `cockroach debug zip`. - -### Files - -`cockroach debug zip` collects log files, heap profiles, CPU profiles, and goroutine dumps from the last 48 hours, by default. - -{{site.data.alerts.callout_success}} -These files can greatly increase the size of the `cockroach debug zip` output. To limit the `.zip` file size for a large cluster, we recommend first experimenting with [`cockroach debug list-files`](cockroach-debug-list-files.html) and then using [flags](#flags) to filter the files. 
-{{site.data.alerts.end}} - -The following files collected by `cockroach debug zip`, which are found in the individual node directories, can be filtered using the `--exclude-files`, `--include-files`, `--files-from`, and/or `--files-until` [flags](#flags): - -| Information | Filename | -|------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------| -| [Log files](configure-logs.html#log-file-naming) | `cockroach-{log-file-group}.{host}.{user}.{start timestamp in UTC}.{process ID}.log` | -| Goroutine dumps | `goroutine_dump.{date-and-time}.{metadata}.double_since_last_dump.{metadata}.txt.gz` | -| Heap profiles | `memprof.{date-and-time}.{heapsize}.pprof` | -| Memory statistics | `memstats.{date-and-time}.{heapsize}.txt` | -| CPU profiles | `cpuprof.{date-and-time}` | -| [Active query dumps](cluster-setup-troubleshooting.html#out-of-memory-oom-crash) | `activequeryprof.{date-and-time}.csv` | - -The following information is also contained in the `.zip` file, and cannot be filtered: - -- Cluster events -- Database details -- Schema change events -- Database, table, node, and range lists -- Node details -- Node liveness -- Gossip data -- Stack traces -- Range details -- Jobs -- [Cluster Settings](cluster-settings.html) -- [Metrics](metrics.html) -- [Replication Reports](query-replication-reports.html) -- Problem ranges -- CPU profiles -- A script (`hot-ranges.sh`) that summarizes the hottest ranges (ranges receiving a high number of reads or writes) - -## Subcommands - -{% include {{ page.version.version }}/misc/debug-subcommands.md %} - -## Synopsis - -~~~ shell -$ cockroach debug zip {ZIP file destination} {flags} -~~~ - -{{site.data.alerts.callout_info}} -The following [flags](#flags) must apply to an active CockroachDB node. If no nodes are live, you must [start at least one node](cockroach-start.html). -{{site.data.alerts.end}} - -## Flags - -The `debug zip` subcommand supports the following [general-use](#general), [client connection](#client-connection), and [logging](#logging) flags. - -### General - -Flag | Description ------|----------- -`--cpu-profile-duration` | Fetch CPU profiles from the cluster with the specified sample duration in seconds. The `debug zip` command will block for the duration specified. A value of `0` disables this feature.

**Default:** `5` -`--concurrency` | The maximum number of nodes to concurrently poll for data. This can be any value between `1` and `15`. -`--exclude-files` | [Files](#files) to exclude from the generated `.zip`. This can be used to limit the size of the generated `.zip`, and affects logs, heap profiles, goroutine dumps, and/or CPU profiles. The files are specified as a comma-separated list of [glob patterns](https://en.wikipedia.org/wiki/Glob_(programming)). For example:

`--exclude-files=*.log`

Note that this flag is applied _after_ `--include-files`. Use [`cockroach debug list-files`](cockroach-debug-list-files.html) with this flag to see a list of files that will be contained in the `.zip`. -`--exclude-nodes` | Specify nodes to exclude from inspection as a comma-separated list or range of node IDs. For example:<br>

`--exclude-nodes=1,10,13-15` -`--files-from` | Start timestamp for log file, goroutine dump, and heap profile collection. This can be used to limit the size of the generated `.zip`, which is increased by these files. The timestamp uses the format `YYYY-MM-DD`, followed optionally by `HH:MM:SS` or `HH:MM`. For example:

`--files-from='2021-07-01 15:00'`

When specifying a narrow time window, we recommend adding extra seconds/minutes to account for uncertainties such as clock drift.

**Default:** 48 hours before now -`--files-until` | End timestamp for log file, goroutine dump, and heap profile collection. This can be used to limit the size of the generated `.zip`, which is increased by these files. The timestamp uses the format `YYYY-MM-DD`, followed optionally by `HH:MM:SS` or `HH:MM`. For example:

`--files-until='2021-07-01 16:00'`

When specifying a narrow time window, we recommend adding extra seconds/minutes to account for uncertainties such as clock drift.

**Default:** 24 hours beyond now (to include files created during `.zip` creation) -`--include-files` | [Files](#files) to include in the generated `.zip`. This can be used to limit the size of the generated `.zip`, and affects logs, heap profiles, goroutine dumps, and/or CPU profiles. The files are specified as a comma-separated list of [glob patterns](https://en.wikipedia.org/wiki/Glob_(programming)). For example:

`--include-files=*.pprof`

Note that this flag is applied _before_ `--exclude-files`. Use [`cockroach debug list-files`](cockroach-debug-list-files.html) with this flag to see a list of files that will be contained in the `.zip`. -`--nodes` | Specify nodes to inspect as a comma-separated list or range of node IDs. For example:

`--nodes=1,10,13-15` -`--redact` | **New in v22.1.9** Redact sensitive data from the generated `.zip`, with the exception of range keys, which must remain unredacted because they are essential to support CockroachDB. See [Redact sensitive information](#redact-sensitive-information) for an example. -`--redact-logs` | **Deprecated** Redact [sensitive data](configure-logs.html#redact-logs) from the log files. Note that this flag removes sensitive information only from the log files. The other items (listed above) collected by the `debug zip` command may still contain sensitive information. To redact sensitive data across the entire generated `.zip`, use the `--redact` flag instead. -`--timeout` | Return an error if the command does not conclude within a specified nonzero value. The timeout is suffixed with `s` (seconds), `m` (minutes), or `h` (hours). For example:

`--timeout=2m` - -### Client connection - -Flag | Description -----|------------ -`--cert-principal-map` | A comma-separated list of `<cert-principal>:<db-principal>` mappings. This allows mapping the principal in a cert to a DB principal such as `node` or `root` or any SQL user. This is intended for use in situations where the certificate management system places restrictions on the `Subject.CommonName` or `SubjectAlternateName` fields in the certificate (e.g., disallowing a `CommonName` like `node` or `root`). If multiple mappings are provided for the same `<cert-principal>`, the last one specified in the list takes precedence. A principal not specified in the map is passed through as-is via the identity function. A cert is allowed to authenticate a DB principal if the DB principal name is contained in the mapped `CommonName` or DNS-type `SubjectAlternateName` fields. -`--certs-dir` | The path to the [certificate directory](cockroach-cert.html) containing the CA and client certificates and client key.<br>

**Env Variable:** `COCKROACH_CERTS_DIR`
**Default:** `${HOME}/.cockroach-certs/` -`--cluster-name` | The cluster name to use to verify the cluster's identity. If the cluster has a cluster name, you must include this flag. For more information, see [`cockroach start`](cockroach-start.html#general). -`--disable-cluster-name-verification` | Disables the cluster name check for this command. This flag must be paired with `--cluster-name`. For more information, see [`cockroach start`](cockroach-start.html#general). -`--host` | The server host and port number to connect to. This can be the address of any node in the cluster.

**Env Variable:** `COCKROACH_HOST`
**Default:** `localhost:26257` -`--insecure` | Use an insecure connection.

**Env Variable:** `COCKROACH_INSECURE`
**Default:** `false` - `--url` | A [connection URL](connection-parameters.html#connect-using-a-url) to use instead of the other arguments. To convert a connection URL to the syntax that works with your client driver, run [`cockroach convert-url`](connection-parameters.html#convert-a-url-for-different-drivers).

**Env Variable:** `COCKROACH_URL`
**Default:** no URL - -### Logging - -{% include {{ page.version.version }}/misc/logging-defaults.md %} - -## Examples - -### Generate a debug zip file - -Generate the debug zip file for an insecure cluster: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach debug zip ./cockroach-data/logs/debug.zip --insecure --host=200.100.50.25 -~~~ - -Generate the debug zip file for a secure cluster: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach debug zip ./cockroach-data/logs/debug.zip --host=200.100.50.25 -~~~ - -{{site.data.alerts.callout_info}} -Secure examples assume you have the appropriate certificates in the default certificate directory, `${HOME}/.cockroach-certs/`. -{{site.data.alerts.end}} - -### Generate a debug zip file with logs only - -Generate a debug zip file containing only log files: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach debug zip ./cockroach-data/logs/debug.zip --include-files=*.log -~~~ - -### Redact sensitive information - -Example of a log string without redaction enabled: - -~~~ -server/server.go:1423 ⋮ password of user ‹admin› was set to ‹"s3cr34?!@x_"› -~~~ - -Enable log redaction: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach debug zip ./cockroach-data/logs/debug.zip --redact --insecure --host=200.100.50.25 -~~~ - -~~~ -server/server.go:1423 ⋮ password of user ‹×› was set to ‹×› -~~~ - -## See also - -- [File an Issue](file-an-issue.html) -- [`cockroach` Commands Overview](cockroach-commands.html) -- [Troubleshooting Overview](troubleshooting-overview.html) diff --git a/src/current/v22.1/cockroach-demo.md b/src/current/v22.1/cockroach-demo.md deleted file mode 100644 index 163e574cacd..00000000000 --- a/src/current/v22.1/cockroach-demo.md +++ /dev/null @@ -1,683 +0,0 @@ ---- -title: cockroach demo -summary: Use cockroach demo to open a SQL shell to a temporary, in-memory, CockroachDB cluster. -toc: true -docs_area: reference.cli ---- - -The `cockroach demo` [command](cockroach-commands.html) starts a temporary, in-memory CockroachDB cluster of one or more nodes, with or without a preloaded dataset, and opens an interactive SQL shell to the cluster. - -- All [SQL shell](#sql-shell) commands, client-side options, help, and shortcuts supported by the [`cockroach sql`](cockroach-sql.html) command are also supported by `cockroach demo`. -- The in-memory cluster persists only as long as the SQL shell is open. As soon as the shell is exited, the cluster and all its data are permanently destroyed. This command is therefore recommended only as an easy way to experiment with the CockroachDB SQL dialect. -- By default, `cockroach demo` starts in secure mode using TLS certificates to encrypt network communication. It also serves a local [DB Console](#connection-parameters) that does not use TLS encryption. -- Each instance of `cockroach demo` loads a temporary [Enterprise license](https://www.cockroachlabs.com/get-cockroachdb/enterprise/) that expires after 24 hours. To prevent the loading of a temporary license, set the `--disable-demo-license` flag. -- `cockroach demo` opens the SQL shell with a new [SQL user](security-reference/authorization.html#sql-users) named `demo`. The `demo` user is assigned a random password and granted the [`admin` role](security-reference/authorization.html#admin-role). - -{{site.data.alerts.callout_danger}} -`cockroach demo` is designed for testing purposes only. It is not suitable for production deployments. 
To see a list of recommendations for production deployments, see the [Production Checklist](recommended-production-settings.html). -{{site.data.alerts.end}} - -## Synopsis - -View help for `cockroach demo`: - -~~~ shell -$ cockroach demo --help -~~~ - -Start a single-node demo cluster with the `movr` dataset pre-loaded: - -~~~ shell -$ cockroach demo -~~~ - -Load a different dataset into a demo cluster: - -~~~ shell -$ cockroach demo -~~~ - -Run the `movr` workload against a demo cluster: - -~~~ shell -$ cockroach demo --with-load -~~~ - -Execute SQL from the command line against a demo cluster: - -~~~ shell -$ cockroach demo --execute=";" --execute="" -~~~ - -Start a multi-node demo cluster: - -~~~ shell -$ cockroach demo --nodes= -~~~ - -Start a multi-region demo cluster with default region and zone localities: - -~~~ shell -$ cockroach demo --global --nodes= -~~~ - -Start a multi-region demo cluster with manually defined localities: - -~~~ shell -$ cockroach demo --nodes= --demo-locality= -~~~ - -Stop a demo cluster: - -~~~ sql -> \q -~~~ - -~~~ sql -> quit -~~~ - -~~~ sql -> exit -~~~ - -~~~ shell -ctrl-d -~~~ - - -## Datasets - -{{site.data.alerts.callout_success}} -By default, the `movr` dataset is pre-loaded into a demo cluster. To load a different dataset, use [`cockroach demo `](#load-a-sample-dataset-into-a-demo-cluster). To start a demo cluster without a pre-loaded dataset, pass the `--no-example-database` flag. -{{site.data.alerts.end}} - -Workload | Description ----------|------------ -`bank` | A `bank` database, with one `bank` table containing account details. -`intro` | An `intro` database, with one table, `mytable`, with a hidden message. -`kv` | A `kv` database, with one key-value-style table. -`movr` | A `movr` database, with several tables of data for the [MovR example application](movr.html).

By default, `cockroach demo` loads the `movr` database as the [current database](sql-name-resolution.html#current-database), with sample region (`region`) and availability zone (`az`) replica localities for each node specified with the [`--nodes` flag](cockroach-demo.html#flags). -`startrek` | A `startrek` database, with two tables, `episodes` and `quotes`. -`tpcc` | A `tpcc` database, with a rich schema of multiple tables. -`ycsb` | A `ycsb` database, with a `usertable` from the Yahoo! Cloud Serving Benchmark. - -## Flags - -### General - -The `demo` command supports the following general-use flags. - -Flag | Description ------|------------ -`--auto-enable-rangefeeds` | Override the default behavior of `cockroach demo`, which has rangefeeds enabled on startup. If you do not need to use [changefeeds](create-and-configure-changefeeds.html) with your demo cluster, use `--auto-enable-rangefeeds=false` to disable rangefeeds and improve performance. See [Enable rangefeeds](create-and-configure-changefeeds.html#enable-rangefeeds) for more detail.

**Default:** `true` -`--cache` | For each demo node, the total size for caches. This can be a percentage (notated as a decimal or with `%`) or any bytes-based unit, for example:

`--cache=.25`
`--cache=25%`
`--cache=1000000000 ----> 1000000000 bytes`
`--cache=1GB ----> 1000000000 bytes`
`--cache=1GiB ----> 1073741824 bytes`

**Default:** `64MiB` -`--demo-locality` | Specify [locality](cockroach-start.html#locality) information for each demo node. The input is a colon-separated list of key-value pairs, where the ith pair is the locality setting for the ith demo cockroach node.

For example, the following option assigns node 1's region to `us-east1` and availability zone to `1`, node 2's region to `us-east2` and availability zone to `2`, and node 3's region to `us-east3` and availability zone to `3`:

`--demo-locality=region=us-east1,az=1:region=us-east2,az=2:region=us-east3,az=3`<br>

By default, `cockroach demo` uses sample region (`region`) and availability zone (`az`) replica localities for each node specified with the `--nodes` flag. -`--disable-demo-license` | Start the demo cluster without loading a temporary [Enterprise license](https://www.cockroachlabs.com/get-started-cockroachdb/) that expires after 24 hours.

Setting the `COCKROACH_SKIP_ENABLING_DIAGNOSTIC_REPORTING` environment variable will also prevent the loading of a temporary license, along with preventing the sharing of anonymized [diagnostic details](diagnostics-reporting.html) with Cockroach Labs. -`--echo-sql` | Reveal the SQL statements sent implicitly by the command-line utility. This can also be enabled within the interactive SQL shell via the `\set echo` [shell command](#commands). -`--embedded` | Minimizes the SQL shell [welcome text](#welcome-text) to be appropriate for embedding in playground-type environments. Specifically, this flag removes details that users in an embedded environment have no control over (e.g., networking information). -`--no-example-database` | Start the demo cluster without a pre-loaded dataset.
To obtain this behavior automatically in every new `cockroach demo` session, set the `COCKROACH_NO_EXAMPLE_DATABASE` environment variable to `true`. -`--execute`

`-e` | Execute SQL statements directly from the command line, without opening a shell. This flag can be set multiple times, and each instance can contain one or more statements separated by semi-colons.

If an error occurs in any statement, the command exits with a non-zero status code and further statements are not executed. The results of each statement are printed to the standard output (see `--format` for formatting options). -`--format` | How to display table rows printed to the standard output. Possible values: `tsv`, `csv`, `table`, `raw`, `records`, `sql`, `html`.

**Default:** `table` for sessions that [output on a terminal](cockroach-sql.html#session-and-output-types); `tsv` otherwise

This flag corresponds to the `display_format` [client-side option](#client-side-options) for use in interactive sessions. -`--geo-partitioned-replicas` | Start a 9-node demo cluster with [geo-partitioning](partitioning.html) applied to the [`movr`](movr.html) database. -`--global` | Simulates a [multi-region cluster](simulate-a-multi-region-cluster-on-localhost.html) which sets the [`--locality` flag on node startup](cockroach-start.html#locality) to three different regions. It also simulates the network latency that would occur between them given the specified localities. In order for this to operate as expected, with 3 nodes in each of 3 regions, you must also pass the `--nodes 9` argument. -`--http-port` | Specifies a custom HTTP port to the [DB Console](ui-overview.html) for the first node of the demo cluster.

In multi-node clusters, the HTTP ports for additional nodes increase from the port of the first node, in increments of 1. For example, if the first node has an HTTP port of `5000`, the second node will have the HTTP port `5001`. -`--insecure` | Include this to start the demo cluster in insecure mode.<br>

**Env Variable:** `COCKROACH_INSECURE` -`--listening-url-file` | The file to which the node's SQL connection URL will be written as soon as the demo cluster is initialized and the node is ready to accept connections.

This flag is useful for automation because it allows you to wait until the demo cluster has been initialized so that subsequent commands can connect automatically. -`--max-sql-memory` | For each demo node, the maximum in-memory storage capacity for temporary SQL data, including prepared queries and intermediate data rows during query execution. This can be a percentage (notated as a decimal or with `%`) or any bytes-based unit, for example:

`--max-sql-memory=.25`
`--max-sql-memory=25%`
`--max-sql-memory=1000000000 ----> 1000000000 bytes`<br>
`--max-sql-memory=1GB ----> 1000000000 bytes`
`--max-sql-memory=1GiB ----> 1073741824 bytes`

**Default:** `128MiB` -`--nodes` | Specify the number of in-memory nodes to create for the demo.

**Default:** 1 -`--safe-updates` | Disallow potentially unsafe SQL statements, including `DELETE` without a `WHERE` clause, `UPDATE` without a `WHERE` clause, and `ALTER TABLE ... DROP COLUMN`.

**Default:** `true` for [interactive sessions](cockroach-sql.html#session-and-output-types); `false` otherwise

Potentially unsafe SQL statements can also be allowed/disallowed for an entire session via the `sql_safe_updates` [session variable](set-vars.html). -`--set` | Set a [client-side option](#client-side-options) before starting the SQL shell or executing SQL statements from the command line via `--execute`. This flag may be specified multiple times, once per option.

After starting the SQL shell, the `\set` and `\unset` commands can be used to enable and disable client-side options as well. -`--sql-port` | Specifies a custom SQL port for the first node of the demo cluster.<br>

In multi-node clusters, the SQL ports for additional nodes increase from the port of the first node, in increments of 1. For example, if the first node has the SQL port `3000`, the second node will have the SQL port `3001`. -`--with-load` | Run a demo [`movr`](movr.html) workload against the preloaded `movr` database.<br>

When running a multi-node demo cluster, load is balanced across all nodes. - -### Logging - -By default, the `demo` command does not log messages. - -If you need to troubleshoot this command's behavior, you can [customize its logging behavior](configure-logs.html). - -## SQL shell - -### Welcome text - -When the SQL shell connects to the demo cluster at startup, it prints a welcome text with some tips and cluster details. Most of these details resemble the [welcome text](cockroach-sql.html#welcome-message) that is printed when connecting `cockroach sql` to a permanent cluster. `cockroach demo` also includes some [connection parameters](#connection-parameters) for connecting to the DB Console or for connecting another SQL client to the demo cluster. - -~~~ shell -# -# Welcome to the CockroachDB demo database! -# -# You are connected to a temporary, in-memory CockroachDB cluster of 9 nodes. -# -# This demo session will attempt to enable Enterprise features -# by acquiring a temporary license from Cockroach Labs in the background. -# To disable this behavior, set the environment variable -# COCKROACH_SKIP_ENABLING_DIAGNOSTIC_REPORTING=true. -# -# Beginning initialization of the movr dataset, please wait... -# -# Waiting for license acquisition to complete... -# -# Partitioning the demo database, please wait... -# -# The cluster has been preloaded with the "movr" dataset -# (MovR is a fictional vehicle sharing company). -# -# Reminder: your changes to data stored in the demo session will not be saved! -# -# If you wish to access this demo cluster using another tool, you will need -# the following details: -# -# - Connection parameters: -# (webui) http://127.0.0.1:8080/demologin?password=demo55826&username=demo -# (sql) postgresql://demo:demo55826@127.0.0.1:26257/movr?sslmode=require -# (sql/jdbc) jdbc:postgresql://127.0.0.1:26257/movr?password=demo55826&sslmode=require&user=demo -# (sql/unix) postgresql://demo:demo55826@/movr?host=%2Fvar%2Ffolders%2F8c%2F915dtgrx5_57bvc5tq4kpvqr0000gn%2FT%2Fdemo699845497&port=26257 -# -# To display connection parameters for other nodes, use \demo ls. -# - Username: "demo", password: "demo55826" -# - Directory with certificate files (for certain SQL drivers/tools): /var/folders/8c/915dtgrx5_57bvc5tq4kpvqr0000gn/T/demo699845497 -# -# Server version: CockroachDB CCL {{ page.release_info.version }} (x86_64-apple-darwin20.5.0, built {{ page.release_info.build_time }}) (same version as client) -# Cluster ID: f78b7feb-b6cf-4396-9d7f-494982d7d81e -# Organization: Cockroach Demo -# -# Enter \? for a brief introduction. -# -~~~ - -### Connection parameters - -The SQL shell welcome text includes connection parameters for accessing the DB Console and for connecting other SQL clients to the demo cluster: - -~~~ -# - Connection parameters: -# (webui) http://127.0.0.1:8080/demologin?password=demo55826&username=demo -# (sql) postgresql://demo:demo55826@127.0.0.1:26257/movr?sslmode=require -# (sql/jdbc) jdbc:postgresql://127.0.0.1:26257/movr?password=demo55826&sslmode=require&user=demo -# (sql/unix) postgresql://demo:demo55826@/movr?host=%2Fvar%2Ffolders%2F8c%2F915dtgrx5_57bvc5tq4kpvqr0000gn%2FT%2Fdemo699845497&port=26257 -~~~ - -Parameter | Description -----------|------------ -`webui` | Use this link to access a local [DB Console](ui-overview.html) to the demo cluster. -`sql` | Use this connection URL for standard sql/tcp connections from other SQL clients such as [`cockroach sql`](cockroach-sql.html).
The default SQL port for the first node of a demo cluster is `26257`. -`sql/unix` | Use this connection URL to establish a [Unix domain socket connection](cockroach-sql.html#connect-to-a-cluster-listening-for-unix-domain-socket-connections) with a client that is installed on the same machine. - -{{site.data.alerts.callout_info}} -You do not need to create or specify node and client certificates in `sql` or `sql/unix` connection URLs. Instead, you can securely connect to the demo cluster with the random password generated for the `demo` user. -{{site.data.alerts.end}} - -When running a multi-node demo cluster, use the `\demo ls` [shell command](#commands) to list the connection parameters for all nodes: - -{% include_cached copy-clipboard.html %} -~~~ sql -> \demo ls -~~~ - -~~~ -node 1: - (webui) http://127.0.0.1:8080/demologin?password=demo76950&username=demo - (sql) postgres://demo:demo76950@127.0.0.1:26257?sslmode=require - (sql/unix) postgres://demo:demo76950@?host=%2Fvar%2Ffolders%2Fc8%2Fb_q93vjj0ybfz0fz0z8vy9zc0000gp%2FT%2Fdemo070856957&port=26257 - -node 2: - (webui) http://127.0.0.1:8081/demologin?password=demo76950&username=demo - (sql) postgres://demo:demo76950@127.0.0.1:26258?sslmode=require - (sql/unix) postgres://demo:demo76950@?host=%2Fvar%2Ffolders%2Fc8%2Fb_q93vjj0ybfz0fz0z8vy9zc0000gp%2FT%2Fdemo070856957&port=26258 - -node 3: - (webui) http://127.0.0.1:8082/demologin?password=demo76950&username=demo - (sql) postgres://demo:demo76950@127.0.0.1:26259?sslmode=require - (sql/unix) postgres://demo:demo76950@?host=%2Fvar%2Ffolders%2Fc8%2Fb_q93vjj0ybfz0fz0z8vy9zc0000gp%2FT%2Fdemo070856957&port=26259 -~~~ - -### Commands - -- [General](#general) -- [Demo-specific](#demo-specific) - -#### General - -{% include {{ page.version.version }}/sql/shell-commands.md %} - -#### Demo-specific - -`cockroach demo` offers the following additional shell commands. Note that these commands are **experimental** and their interface and output are subject to change. - -Command | Usage ---------|------ -`\demo ls` | List the demo nodes and their connection URLs. -`\demo add region=,zone=` | Add a node to a single-region or multi-region demo cluster. [See an example](#add-shut-down-and-restart-nodes-in-a-multi-node-demo-cluster). -`\demo shutdown ` | Shuts down a node in a multi-node demo cluster.

This command simulates stopping a node that can be restarted. [See an example](#add-shut-down-and-restart-nodes-in-a-multi-node-demo-cluster). -`\demo restart ` | Restarts a node in a multi-node demo cluster. [See an example](#add-shut-down-and-restart-nodes-in-a-multi-node-demo-cluster). -`\demo decommission ` | Decommissions a node in a multi-node demo cluster.

This command simulates [decommissioning a node](node-shutdown.html?filters=decommission). -`\demo recommission ` | Recommissions a decommissioned node in a multi-node demo cluster. - -### Client-side options - -{% include {{ page.version.version }}/sql/shell-options.md %} - -### Help - -{% include {{ page.version.version }}/sql/shell-help.md %} - -### Shortcuts - -{% include {{ page.version.version }}/sql/shell-shortcuts.md %} - -### macOS terminal configuration - -{% include {{ page.version.version }}/sql/macos-terminal-configuration.md %} - -## Diagnostics reporting - -By default, `cockroach demo` shares anonymous usage details with Cockroach Labs. To opt out, set the [`diagnostics.reporting.enabled`](diagnostics-reporting.html#after-cluster-initialization) [cluster setting](cluster-settings.html) to `false`. You can also opt out by setting the [`COCKROACH_SKIP_ENABLING_DIAGNOSTIC_REPORTING`](diagnostics-reporting.html#at-cluster-initialization) environment variable to `false` before running `cockroach demo`. - -## Examples - -In these examples, we demonstrate how to start a shell with `cockroach demo`. For more SQL shell features, see the [`cockroach sql` examples](cockroach-sql.html#examples). - -### Start a single-node demo cluster - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach demo -~~~ - -By default, `cockroach demo` loads the `movr` dataset in to the demo cluster: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW TABLES; -~~~ - -~~~ - schema_name | table_name | type | owner | estimated_row_count | locality ---------------+----------------------------+-------+-------+---------------------+----------- - public | promo_codes | table | demo | 1000 | NULL - public | rides | table | demo | 500 | NULL - public | user_promo_codes | table | demo | 0 | NULL - public | users | table | demo | 50 | NULL - public | vehicle_location_histories | table | demo | 1000 | NULL - public | vehicles | table | demo | 15 | NULL -(6 rows) -~~~ - -You can query the pre-loaded data: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT name FROM users LIMIT 10; -~~~ - -~~~ - name ------------------------ - Tyler Dalton - Dillon Martin - Deborah Carson - David Stanton - Maria Weber - Brian Campbell - Carl Mcguire - Jennifer Sanders - Cindy Medina - Daniel Hernandez MD -(10 rows) -~~~ - -You can also create and query new tables: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE drivers ( - id UUID PRIMARY KEY DEFAULT gen_random_uuid(), - city STRING NOT NULL, - name STRING, - dl STRING UNIQUE, - address STRING -); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO drivers (city, name) VALUES ('new york', 'Catherine Nelson'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM drivers; -~~~ - -~~~ - id | city | name | dl | address ----------------------------------------+----------+------------------+------+---------- - 4d363104-2c48-43b5-aa1e-955b81415c7d | new york | Catherine Nelson | NULL | NULL -(1 row) -~~~ - -### Start a multi-node demo cluster - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach demo --nodes=3 -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> \demo ls -~~~ - -~~~ -node 1: - (webui) http://127.0.0.1:8080/demologin?password=demo76950&username=demo - (sql) postgres://demo:demo76950@127.0.0.1:26257?sslmode=require - (sql/unix) postgres://demo:demo76950@?host=%2Fvar%2Ffolders%2Fc8%2Fb_q93vjj0ybfz0fz0z8vy9zc0000gp%2FT%2Fdemo070856957&port=26257 - -node 2: - (webui) 
http://127.0.0.1:8081/demologin?password=demo76950&username=demo - (sql) postgres://demo:demo76950@127.0.0.1:26258?sslmode=require - (sql/unix) postgres://demo:demo76950@?host=%2Fvar%2Ffolders%2Fc8%2Fb_q93vjj0ybfz0fz0z8vy9zc0000gp%2FT%2Fdemo070856957&port=26258 - -node 3: - (webui) http://127.0.0.1:8082/demologin?password=demo76950&username=demo - (sql) postgres://demo:demo76950@127.0.0.1:26259?sslmode=require - (sql/unix) postgres://demo:demo76950@?host=%2Fvar%2Ffolders%2Fc8%2Fb_q93vjj0ybfz0fz0z8vy9zc0000gp%2FT%2Fdemo070856957&port=26259 -~~~ - -### Load a sample dataset into a demo cluster - -By default, `cockroach demo` loads the `movr` dataset in to the demo cluster. To pre-load any of the other [available datasets](#datasets) using `cockroach demo `. For example, to load the `ycsb` dataset: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach demo ycsb -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW TABLES; -~~~ - -~~~ - schema_name | table_name | type | owner | estimated_row_count | locality ---------------+------------+-------+-------+---------------------+----------- - public | usertable | table | demo | 0 | NULL -(1 row) -~~~ - -### Run load against a demo cluster - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach demo --with-load -~~~ - -This command starts a demo cluster with the `movr` database preloaded and then inserts rows into each table in the `movr` database. You can monitor the workload progress on the [DB Console](ui-overview-dashboard.html#sql-statements). - -When running a multi-node demo cluster, load is balanced across all nodes. - -### Execute SQL from the command-line against a demo cluster - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach demo \ ---execute="CREATE TABLE drivers ( - id UUID DEFAULT gen_random_uuid(), - city STRING NOT NULL, - name STRING, - dl STRING UNIQUE, - address STRING, - CONSTRAINT primary_key PRIMARY KEY (city ASC, id ASC) -);" \ ---execute="INSERT INTO drivers (city, name) VALUES ('new york', 'Catherine Nelson');" \ ---execute="SELECT * FROM drivers;" -~~~ - -~~~ -CREATE TABLE -INSERT 1 - id | city | name | dl | address ----------------------------------------+----------+------------------+------+---------- - dd6afc4c-bf31-455e-bb6d-bfb8f18ad6cc | new york | Catherine Nelson | NULL | NULL -(1 row) -~~~ - -### Connect an additional SQL client to the demo cluster - -In addition to the interactive SQL shell that opens when you run `cockroach demo`, you can use the [connection parameters](#connection-parameters) in the welcome text to connect additional SQL clients to the cluster. 
- -First, use `\demo ls` to list the connection parameters for each node in the demo cluster: - -{% include_cached copy-clipboard.html %} -~~~ sql -> \demo ls -~~~ - -~~~ -node 1: - (webui) http://127.0.0.1:8080/demologin?password=demo76950&username=demo - (sql) postgres://demo:demo76950@127.0.0.1:26257?sslmode=require - (sql/unix) postgres://demo:demo76950@?host=%2Fvar%2Ffolders%2Fc8%2Fb_q93vjj0ybfz0fz0z8vy9zc0000gp%2FT%2Fdemo070856957&port=26257 - -node 2: - (webui) http://127.0.0.1:8081/demologin?password=demo76950&username=demo - (sql) postgres://demo:demo76950@127.0.0.1:26258?sslmode=require - (sql/unix) postgres://demo:demo76950@?host=%2Fvar%2Ffolders%2Fc8%2Fb_q93vjj0ybfz0fz0z8vy9zc0000gp%2FT%2Fdemo070856957&port=26258 - -node 3: - (webui) http://127.0.0.1:8082/demologin?password=demo76950&username=demo - (sql) postgres://demo:demo76950@127.0.0.1:26259?sslmode=require - (sql/unix) postgres://demo:demo76950@?host=%2Fvar%2Ffolders%2Fc8%2Fb_q93vjj0ybfz0fz0z8vy9zc0000gp%2FT%2Fdemo070856957&port=26259 -~~~ - -Then open a new terminal and run [`cockroach sql`](cockroach-sql.html) with the `--url` flag set to the `sql` connection URL of the node to which you want to connect: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach sql --url='postgres://demo:demo53628@127.0.0.1:26259?sslmode=require' -~~~ - -You can also use this URL to connect an application to the demo cluster as the `demo` user. - -### Start a multi-region demo cluster - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach demo --global --nodes 9 -~~~ - -This command starts a 9-node demo cluster with the `movr` database preloaded and region and zone localities set at the cluster level. - -{{site.data.alerts.callout_info}} -The `--global` flag is an experimental feature of `cockroach demo`. The interface and output are subject to change. -{{site.data.alerts.end}} - -For a tutorial that uses a demo cluster to demonstrate CockroachDB's multi-region capabilities, see [Low Latency Reads and Writes in a Multi-Region Cluster](demo-low-latency-multi-region-deployment.html). - -### Add, shut down, and restart nodes in a multi-node demo cluster - -In a multi-node demo cluster, you can use `\demo` [shell commands](#commands) to add, shut down, restart, decommission, and recommission individual nodes. - -{{site.data.alerts.callout_info}} -{% include feature-phases/preview.md %} -{{site.data.alerts.end}} - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach demo --nodes=9 -~~~ - -{{site.data.alerts.callout_info}} -`cockroach demo` does not support the `\demo add` and `\demo shutdown` commands in demo clusters started with the `--global` flag. 
-{{site.data.alerts.end}} - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW REGIONS FROM CLUSTER; -~~~ - -~~~ - region | zones ----------------+---------- - europe-west1 | {b,c,d} - us-east1 | {b,c,d} - us-west1 | {a,b,c} -(3 rows) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> \demo ls -~~~ - -~~~ -node 1: - (webui) http://127.0.0.1:8080/demologin?password=demo76950&username=demo - (sql) postgres://demo:demo76950@127.0.0.1:26257?sslmode=require - (sql/unix) postgres://demo:demo76950@?host=%2Fvar%2Ffolders%2Fc8%2Fb_q93vjj0ybfz0fz0z8vy9zc0000gp%2FT%2Fdemo070856957&port=26257 - -node 2: - (webui) http://127.0.0.1:8081/demologin?password=demo76950&username=demo - (sql) postgres://demo:demo76950@127.0.0.1:26258?sslmode=require - (sql/unix) postgres://demo:demo76950@?host=%2Fvar%2Ffolders%2Fc8%2Fb_q93vjj0ybfz0fz0z8vy9zc0000gp%2FT%2Fdemo070856957&port=26258 - -node 3: - (webui) http://127.0.0.1:8082/demologin?password=demo76950&username=demo - (sql) postgres://demo:demo76950@127.0.0.1:26259?sslmode=require - (sql/unix) postgres://demo:demo76950@?host=%2Fvar%2Ffolders%2Fc8%2Fb_q93vjj0ybfz0fz0z8vy9zc0000gp%2FT%2Fdemo070856957&port=26259 - -node 4: - (webui) http://127.0.0.1:8083/demologin?password=demo76950&username=demo - (sql) postgres://demo:demo76950@127.0.0.1:26260?sslmode=require - (sql/unix) postgres://demo:demo76950@?host=%2Fvar%2Ffolders%2Fc8%2Fb_q93vjj0ybfz0fz0z8vy9zc0000gp%2FT%2Fdemo070856957&port=26260 - -node 5: - (webui) http://127.0.0.1:8084/demologin?password=demo76950&username=demo - (sql) postgres://demo:demo76950@127.0.0.1:26261?sslmode=require - (sql/unix) postgres://demo:demo76950@?host=%2Fvar%2Ffolders%2Fc8%2Fb_q93vjj0ybfz0fz0z8vy9zc0000gp%2FT%2Fdemo070856957&port=26261 - -node 6: - (webui) http://127.0.0.1:8085/demologin?password=demo76950&username=demo - (sql) postgres://demo:demo76950@127.0.0.1:26262?sslmode=require - (sql/unix) postgres://demo:demo76950@?host=%2Fvar%2Ffolders%2Fc8%2Fb_q93vjj0ybfz0fz0z8vy9zc0000gp%2FT%2Fdemo070856957&port=26262 - -node 7: - (webui) http://127.0.0.1:8086/demologin?password=demo76950&username=demo - (sql) postgres://demo:demo76950@127.0.0.1:26263?sslmode=require - (sql/unix) postgres://demo:demo76950@?host=%2Fvar%2Ffolders%2Fc8%2Fb_q93vjj0ybfz0fz0z8vy9zc0000gp%2FT%2Fdemo070856957&port=26263 - -node 8: - (webui) http://127.0.0.1:8087/demologin?password=demo76950&username=demo - (sql) postgres://demo:demo76950@127.0.0.1:26264?sslmode=require - (sql/unix) postgres://demo:demo76950@?host=%2Fvar%2Ffolders%2Fc8%2Fb_q93vjj0ybfz0fz0z8vy9zc0000gp%2FT%2Fdemo070856957&port=26264 - -node 9: - (webui) http://127.0.0.1:8088/demologin?password=demo76950&username=demo - (sql) postgres://demo:demo76950@127.0.0.1:26265?sslmode=require - (sql/unix) postgres://demo:demo76950@?host=%2Fvar%2Ffolders%2Fc8%2Fb_q93vjj0ybfz0fz0z8vy9zc0000gp%2FT%2Fdemo070856957&port=26265 -~~~ - -You can shut down and restart any node by node id. 
For example, to shut down the 3rd node and then restart it: - -{% include_cached copy-clipboard.html %} -~~~ sql -> \demo shutdown 3 -~~~ - -~~~ -node 3 has been shutdown -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> \demo restart 3 -~~~ - -~~~ -node 3 has been restarted -~~~ - -You can also decommission the 3rd node and then recommission it: - -{% include_cached copy-clipboard.html %} -~~~ sql -> \demo decommission 3 -~~~ - -~~~ -node 3 has been decommissioned -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> \demo recommission 3 -~~~ - -~~~ -node 3 has been recommissioned -~~~ - -To add a new node to the cluster: - -{% include_cached copy-clipboard.html %} -~~~ sql -> \demo add region=us-central1,zone=a -~~~ - -~~~ -node 10 has been added with locality "region=us-central1,zone=a" -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW REGIONS FROM CLUSTER; -~~~ - -~~~ - region | zones ----------------+---------- - europe-west1 | {b,c,d} - us-central1 | {a} - us-east1 | {b,c,d} - us-west1 | {a,b,c} -(4 rows) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> \demo ls -~~~ - -~~~ -node 1: - (webui) http://127.0.0.1:8080/demologin?password=demo76950&username=demo - (sql) postgres://demo:demo76950@127.0.0.1:26257?sslmode=require - (sql/unix) postgres://demo:demo76950@?host=%2Fvar%2Ffolders%2Fc8%2Fb_q93vjj0ybfz0fz0z8vy9zc0000gp%2FT%2Fdemo070856957&port=26257 - -node 2: - (webui) http://127.0.0.1:8081/demologin?password=demo76950&username=demo - (sql) postgres://demo:demo76950@127.0.0.1:26258?sslmode=require - (sql/unix) postgres://demo:demo76950@?host=%2Fvar%2Ffolders%2Fc8%2Fb_q93vjj0ybfz0fz0z8vy9zc0000gp%2FT%2Fdemo070856957&port=26258 - -... - -node 10: - (webui) http://127.0.0.1:8089/demologin?password=demo76950&username=demo - (sql) postgres://demo:demo76950@127.0.0.1:26266?sslmode=require - (sql/unix) postgres://demo:demo76950@?host=%2Fvar%2Ffolders%2Fc8%2Fb_q93vjj0ybfz0fz0z8vy9zc0000gp%2FT%2Fdemo070856957&port=26266 -~~~ - -### Try your own scenario - -In addition to using one of the [pre-loaded dataset](#datasets), you can create your own database (e.g., [`CREATE DATABASE ;`](create-database.html)), or use the empty `defaultdb` database (e.g., [`SET DATABASE defaultdb;`](set-vars.html)) to test our your own scenario involving any CockroachDB SQL features you are interested in. - -## See also - -- [`cockroach sql`](cockroach-sql.html) -- [`cockroach workload`](cockroach-workload.html) -- [`cockroach` Commands Overview](cockroach-commands.html) -- [SQL Statements](sql-statements.html) -- [Learn CockroachDB SQL](learn-cockroachdb-sql.html) -- [MovR: Vehicle-Sharing App](movr.html) diff --git a/src/current/v22.1/cockroach-gen.md b/src/current/v22.1/cockroach-gen.md deleted file mode 100644 index 22ad59d4d79..00000000000 --- a/src/current/v22.1/cockroach-gen.md +++ /dev/null @@ -1,394 +0,0 @@ ---- -title: cockroach gen -summary: Use cockroach gen to generate command-line interface utlities, such as man pages, and example data. -toc: true -key: generate-cockroachdb-resources.html -docs_area: reference.cli ---- - -The `cockroach gen` [command](cockroach-commands.html) can generate command-line interface (CLI) utilities ([`man` pages](https://en.wikipedia.org/wiki/Man_page) and a `bash` autocompletion script), example SQL data suitable to populate test databases, and an HAProxy configuration file for load balancing a running cluster. - -## Subcommands - -Subcommand | Usage ------------|------ -`man` | Generate man pages for CockroachDB. 
-`autocomplete` | Generate `bash` or `zsh` autocompletion script for CockroachDB.

**Default:** `bash` -`example-data` | Generate example SQL datasets. You can also use the [`cockroach workload`](cockroach-workload.html) command to generate these sample datasets in a persistent cluster and the [`cockroach demo `](cockroach-demo.html) command to generate these datasets in a temporary, in-memory cluster. -`haproxy` | Generate an HAProxy config file for a running CockroachDB cluster. The node addresses included in the config are those advertised by the nodes. Make sure hostnames are resolvable and IP addresses are routable from HAProxy.

[Decommissioned nodes](node-shutdown.html?filters=decommission) are excluded from the config file. - -## Synopsis - -Generate man pages: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach gen man -~~~ - -Generate bash autocompletion script: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach gen autocomplete -~~~ - -Generate example SQL data: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach gen example-data intro | cockroach sql -~~~ - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach gen example-data startrek | cockroach sql -~~~ - -Generate an HAProxy config file for a running cluster: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach gen haproxy -~~~ - -View help: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach gen --help -~~~ - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach gen man --help -~~~ - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach gen autocomplete --help -~~~ - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach gen example-data --help -~~~ - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach gen haproxy --help -~~~ - -## Flags - -The `gen` subcommands supports the following [general-use](#general), [logging](#logging), and [client connection](#client-connection) flags. - -### General - -#### `man` - -Flag | Description ------|----------- -`--path` | The path where man pages will be generated.

**Default:** `man/man1` under the current directory - -#### `autocomplete` - -Flag | Description ------|----------- -`--out` | The path where the autocomplete file will be generated.

**Default:** `cockroach.bash` in the current directory - -#### `example-data` - -No flags are supported. See the [Generate Example Data](#generate-example-data) example for guidance. - -#### `haproxy` - -Flag | Description ------|------------ -`--host` | The server host and port number to connect to. This can be the address of any node in the cluster.

**Env Variable:** `COCKROACH_HOST`
**Default:** `localhost:26257` -`--port`
`-p` | The server port to connect to. Note: The port number can also be specified via `--host`.

**Env Variable:** `COCKROACH_PORT`
**Default:** `26257` -`--insecure` | Use an insecure connection.

**Env Variable:** `COCKROACH_INSECURE`
**Default:** `false` -`--certs-dir` | The path to the [certificate directory](cockroach-cert.html) containing the CA and client certificates and client key.

**Env Variable:** `COCKROACH_CERTS_DIR`
**Default:** `${HOME}/.cockroach-certs/` -`--url` | A [connection URL](connection-parameters.html#connect-using-a-url) to use instead of the other arguments.

**Env Variable:** `COCKROACH_URL`
**Default:** no URL -`--out` | The path where the `haproxy.cfg` file will be generated. If an `haproxy.cfg` file already exists in the directory, it will be overwritten.

**Default:** `haproxy.cfg` in the current directory -`--locality` | If nodes were started with [locality](cockroach-start.html#locality) details, you can use the `--locality` flag here to filter the nodes included in the HAProxy config file, specifying the explicit locality tier(s) or a regular expression to match against. This is useful in cases where you want specific instances of HAProxy to route to specific nodes. See the [Generate an HAProxy configuration file](#generate-an-haproxy-config-file) example for more details. - -### Logging - -{% include {{ page.version.version }}/misc/logging-defaults.md %} - -### Client Connection - -#### `haproxy` - -Flag | Description ------|------------ -`--cluster-name` | The cluster name to use to verify the cluster's identity. If the cluster has a cluster name, you must include this flag. For more information, see [`cockroach start`](cockroach-start.html#general). -`--disable-cluster-name-verification` | Disables the cluster name check for this command. This flag must be paired with `--cluster-name`. For more information, see [`cockroach start`](cockroach-start.html#general). - -## Examples - -### Generate `man` pages - -Generate man pages: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach gen man -~~~ - -Move the man pages to the man directory: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ sudo mv man/man1/* /usr/share/man/man1 -~~~ - -Access man pages: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ man cockroach -~~~ - -### Generate a `bash` autocompletion script - -Generate bash autocompletion script: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach gen autocomplete -~~~ - -Add the script to your `.bashrc` and `.bash_profle`: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ printf "\n\n#cockroach bash autocomplete\nsource 'cockroach.bash'" >> ~/.bashrc -~~~ - -{% include_cached copy-clipboard.html %} -~~~ shell -$ printf "\n\n#cockroach bash autocomplete\nsource 'cockroach.bash'" >> ~/.bash_profile -~~~ - -You can now use `tab` to autocomplete `cockroach` commands. - -### Generate example data - -{{site.data.alerts.callout_success}} -You can also use the [`cockroach workload`](cockroach-workload.html) command to generate these sample datasets in a persistent cluster and the [`cockroach demo `](cockroach-demo.html) command to generate these datasets in a temporary, in-memory cluster. -{{site.data.alerts.end}} - -To test out CockroachDB, you can generate an example `startrek` database, which contains 2 tables, `episodes` and `quotes`. - -First, start up [a demo cluster](cockroach-demo.html): - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach demo -~~~ - -Then, pipe the output from `cockroach gen` to [the URL to the demo cluster](cockroach-demo.html#connection-parameters): - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach gen example-data startrek | cockroach sql --url='postgres://demo:demo11762@127.0.0.1:26257?sslmode=require' -~~~ - -~~~ -CREATE DATABASE -SET -DROP TABLE -DROP TABLE -CREATE TABLE -INSERT 1 -... -CREATE TABLE -INSERT 1 -... 
-~~~ - -Open a [SQL shell](cockroach-sql.html) to view it: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach sql --url='postgres://demo:demo11762@127.0.0.1:26257?sslmode=require' -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW TABLES FROM startrek; -~~~ -~~~ - schema_name | table_name | type | estimated_row_count ---------------+------------+-------+---------------------- - public | episodes | table | 79 - public | quotes | table | 200 -(2 rows) -~~~ - -You can also generate an example `intro` database, which contains 1 table, `mytable`, with a hidden message: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach gen example-data intro | cockroach sql --url='postgres://demo:demo11762@127.0.0.1:26257?sslmode=require' -~~~ - -~~~ -CREATE DATABASE -SET -DROP TABLE -CREATE TABLE -INSERT 1 -INSERT 1 -INSERT 1 -INSERT 1 -... -~~~ - -{% include_cached copy-clipboard.html %} -~~~ shell -# Launch the built-in SQL client to view it: -$ cockroach sql --url='postgres://demo:demo11762@127.0.0.1:26257?sslmode=require' -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW TABLES FROM intro; -~~~ - -~~~ - schema_name | table_name | type | estimated_row_count ---------------+------------+-------+---------------------- - public | mytable | table | 42 -(1 row) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM intro.mytable WHERE (l % 2) = 0; -~~~ - -~~~ - l | v ------+------------------------------------------------------- - 0 | !__aaawwmqmqmwwwaas,,_ .__aaawwwmqmqmwwaaa,, - 2 | !"VT?!"""^~~^"""??T$Wmqaa,_auqmWBT?!"""^~~^^""??YV^ - 4 | ! "?##mW##?"- - 6 | ! C O N G R A T S _am#Z??A#ma, Y - 8 | ! _ummY" "9#ma, A - 10 | ! vm#Z( )Xmms Y - 12 | ! .j####mmm#####mm#m##6. - 14 | ! W O W ! jmm###mm######m#mmm##6 - 16 | ! ]#me*Xm#m#mm##m#m##SX##c - 18 | ! dm#||+*$##m#mm#m#Svvn##m - 20 | ! :mmE=|+||S##m##m#1nvnnX##; A - 22 | ! :m#h+|+++=Xmm#m#1nvnnvdmm; M - 24 | ! Y $#m>+|+|||##m#1nvnnnnmm# A - 26 | ! O ]##z+|+|+|3#mEnnnnvnd##f Z - 28 | ! U D 4##c|+|+|]m#kvnvnno##P E - 30 | ! I 4#ma+|++]mmhvnnvq##P` ! - 32 | ! D I ?$#q%+|dmmmvnnm##! - 34 | ! T -4##wu#mm#pw##7' - 36 | ! -?$##m####Y' - 38 | ! !! "Y##Y"- - 40 | ! -(21 rows) -~~~ - -### Generate an HAProxy config file - -[HAProxy](http://www.haproxy.org/) is one of the most popular open-source TCP load balancers, and CockroachDB includes a built-in command for generating a configuration file that is preset to work with your running cluster. - -
-To generate an HAProxy config file for an entire secure cluster, run the `cockroach gen haproxy` command, specifying the location of [certificate directory](cockroach-cert.html) and the address of any instance running a CockroachDB node: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach gen haproxy \ ---certs-dir= \ ---host=
-~~~ - -To limit the HAProxy config file to nodes matching specific ["localities"](cockroach-start.html#locality), use the `--locality` flag, specifying the explicit locality tier(s) or a regular expression to match against: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach gen haproxy \ ---certs-dir= \ ---host=
---locality=region=us.* -~~~ -
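The value passed to `--locality` is matched against each node's locality string, so a regular expression such as `region=us.*` selects every node whose `region` tier begins with `us`. As a small illustration, assuming three hypothetical nodes started with the localities below, the first two would be included in the generated config and the third excluded:

~~~
# Included by --locality=region=us.*
region=us-east,az=1
region=us-west,az=3

# Excluded by --locality=region=us.*
region=europe-west1,az=b
~~~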
- -
-To generate an HAProxy config file for an entire insecure cluster, run the `cockroach gen haproxy` command, specifying the address of any instance running a CockroachDB node: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach gen haproxy \ ---insecure \ ---host=
-~~~ - -To limit the HAProxy config file to nodes matching specific ["localities"](cockroach-start.html#locality), use the `--locality` flag, specifying the explicit locality tier(s) or a regular expression to match against: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach gen haproxy \ ---insecure \ ---host=
---locality=region=us.* -~~~ -
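Once the configuration file is generated, you can validate it and start HAProxy on the machine that will act as the load balancer. The following is a minimal sketch, assuming HAProxy is already installed on that machine and the generated `haproxy.cfg` is in the current directory; `-c` only checks the configuration, and `-f` points HAProxy at the file:

{% include_cached copy-clipboard.html %}
~~~ shell
# Validate the generated configuration without starting the proxy.
$ haproxy -c -f haproxy.cfg

# Start HAProxy with the generated configuration.
$ haproxy -f haproxy.cfg
~~~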
- -By default, the generated configuration file is called `haproxy.cfg` and looks as follows, with the `server` addresses pre-populated correctly: - -~~~ -global - maxconn 4096 - -defaults - mode tcp - # Timeout values should be configured for your specific use. - # See: https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-timeout%20connect - timeout connect 10s - timeout client 1m - timeout server 1m - # TCP keep-alive on client side. Server already enables them. - option clitcpka - -listen psql - bind :26257 - mode tcp - balance roundrobin - option httpchk GET /health?ready=1 - server cockroach1 :26257 check port 8080 - server cockroach2 :26257 check port 8080 - server cockroach3 :26257 check port 8080 -~~~ - -The file is preset with the minimal [configurations](http://cbonte.github.io/haproxy-dconv/1.7/configuration.html) needed to work with your running cluster: - -Field | Description -------|------------ -`timeout connect`
`timeout client`
`timeout server` | Timeout values that should be suitable for most deployments. -`bind` | The port that HAProxy listens on. This is the port clients will connect to and thus needs to be allowed by your network configuration.

This tutorial assumes HAProxy is running on a separate machine from CockroachDB nodes. If you run HAProxy on the same machine as a node (not recommended), you'll need to change this port, as `26257` is likely already being used by the CockroachDB node. -`balance` | The balancing algorithm. This is set to `roundrobin` to ensure that connections get rotated amongst nodes (connection 1 on node 1, connection 2 on node 2, etc.). Check the [HAProxy Configuration Manual](http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4-balance) for details about this and other balancing algorithms. -`option httpchk` | The HTTP endpoint that HAProxy uses to check node health. [`/health?ready=1`](monitoring-and-alerting.html#health-ready-1) ensures that HAProxy doesn't direct traffic to nodes that are live but not ready to receive requests. -`server` | For each included node, this field specifies the address the node advertises to other nodes in the cluster, i.e., the addressed pass in the [`--advertise-addr` flag](cockroach-start.html#networking) on node startup. Make sure hostnames are resolvable and IP addresses are routable from HAProxy. - -{{site.data.alerts.callout_info}} -For full details on these and other configuration settings, see the [HAProxy Configuration Manual](http://cbonte.github.io/haproxy-dconv/1.7/configuration.html). -{{site.data.alerts.end}} - -## See also - -- [`cockroach` Commands Overview](cockroach-commands.html) -- [Deploy CockroachDB On-Premises](deploy-cockroachdb-on-premises.html) (using HAProxy for load balancing) diff --git a/src/current/v22.1/cockroach-import.md b/src/current/v22.1/cockroach-import.md deleted file mode 100644 index 05dbcd8c313..00000000000 --- a/src/current/v22.1/cockroach-import.md +++ /dev/null @@ -1,114 +0,0 @@ ---- -title: cockroach import -summary: The cockroach import command imports a database or table from a local dump file into a running cluster. -toc: true -docs_area: reference.cli ---- - - The `cockroach import` [command](cockroach-commands.html) imports a database or table from a local dump file into a running cluster. This command [uploads a userfile](cockroach-userfile-upload.html), imports its data, then [deletes the userfile](cockroach-userfile-delete.html). `PGDUMP` and `MYSQLDUMP` file formats are currently supported. - -{{site.data.alerts.callout_info}} -We recommend using `cockroach import` for quick imports from your client (about 15MB or smaller). For larger imports, use the [IMPORT](import.html) statement. -{{site.data.alerts.end}} - -## Required privileges - -The user must have `CREATE` [privileges](security-reference/authorization.html#managing-privileges) on `defaultdb`. - -## Synopsis - -Import a database: - -~~~ shell -$ cockroach import db -~~~ - -Import a table: - -~~~ shell -$ cockroach import table -~~~ - -View help: - -~~~ shell -$ cockroach import --help -~~~ - -## Supported Formats - -- [`pgdump`](migrate-from-postgres.html#step-1-dump-the-postgresql-database) -- [`mysqldump`](migrate-from-mysql.html#step-1-dump-the-mysql-database) - -## Flags - - Flag | Description ------------------+----------------------------------------------------- -`--certs-dir` | The path to the [certificate directory](cockroach-cert.html) containing the CA and client certificates and client key.

**Env Variable:** `COCKROACH_CERTS_DIR`
**Default:** `${HOME}/.cockroach-certs/` -`--insecure` | Use an insecure connection.

**Env Variable:** `COCKROACH_INSECURE`
**Default:** `false` -`--user`
`-u` | The [SQL user](create-user.html) that will own the client session.

**Env Variable:** `COCKROACH_USER`
**Default:** `root` -`--ignore-unsupported-statements` | Ignore statements that are unsupported during an import from a PGDUMP file.
**Default:** `false` -`--log-ignored-statements` | Log statements that are ignored during an import from a PGDUMP file to the specified destination (i.e., [cloud storage](use-cloud-storage-for-bulk-operations.html) or [userfile storage](use-userfile-for-bulk-operations.html)). -`--row-limit=` | The number of rows to import for each table during a PGDUMP or MYSQLDUMP import.
This can be used to check schema and data correctness without running the entire import.
**Default:** `0` - -## Examples - -### Import a database - -To import a database from a local file: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach import db mysqldump /Users/maxroach/Desktop/test-db.sql --certs-dir=certs -~~~ - -~~~ -successfully imported mysqldump file /Users/maxroach/Desktop/test-db.sql -~~~ - -### Import a table - -To import a table from a local file: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach import table test_table pgdump /Users/maxroach/Desktop/test-db.sql --certs-dir=certs -~~~ - -~~~ -successfully imported table test_table from pgdump file /Users/maxroach/Desktop/test-db.sql -~~~ - -### Import a database with unsupported SQL syntax and log all unsupported statements - - To import a database from a `PGDUMP` file that contains unsupported SQL syntax and log the ignored statements to a [userfile](use-userfile-for-bulk-operations.html): - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach import db pgdump /Users/maxroach/Desktop/test-db.sql --certs-dir=certs --ignore-unsupported-statements=true --log-ignored-statements='userfile://defaultdb.public.userfiles_root/unsupported-statements.log' -~~~ - -~~~ -successfully imported table test_table from pgdump file /Users/maxroach/Desktop/test-db.sql -~~~ - -### Import a limited number of rows from a dump file - - To limit the number of rows imported from a dump file: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach import table test_table pgdump /Users/maxroach/Desktop/test-db.sql --certs-dir=certs --row-limit='50' -~~~ - -~~~ -successfully imported table test_table from pgdump file /Users/maxroach/Desktop/test-db.sql -~~~ - -## See also - -- [`cockroach` Commands Overview](cockroach-commands.html) -- [`IMPORT`](import.html) -- [`IMPORT INTO`](import-into.html) -- [Migrate from PostgreSQL](migrate-from-postgres.html) -- [Migrate from MySQL](migrate-from-mysql.html) diff --git a/src/current/v22.1/cockroach-init.md b/src/current/v22.1/cockroach-init.md deleted file mode 100644 index 34ec2438db7..00000000000 --- a/src/current/v22.1/cockroach-init.md +++ /dev/null @@ -1,131 +0,0 @@ ---- -title: cockroach init -summary: Perform a one-time-only initialization of a CockroachDB cluster. -toc: true -key: initialize-a-cluster.html -docs_area: reference.cli ---- - -This page explains the `cockroach init` [command](cockroach-commands.html), which you use to perform a one-time initialization of a new multi-node cluster. For a full tutorial of the cluster startup and initialization process, see one of the [Manual Deployment](manual-deployment.html) tutorials. - -{{site.data.alerts.callout_info}} -When starting a single-node cluster with [`cockroach start-single-node`](cockroach-start-single-node.html), you do not need to use the `cockroach init` command. -{{site.data.alerts.end}} - -## Synopsis - -Perform a one-time initialization of a cluster: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach init -~~~ - -View help: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach init --help -~~~ - -## Flags - -The `cockroach init` command supports the following [client connection](#client-connection) and [logging](#logging) flags. - -{{site.data.alerts.callout_info}} -`cockroach init` must target one of the nodes that was listed with [`--join`](cockroach-start.html#networking) when starting the cluster. Otherwise, the command will not initialize the cluster correctly. 
-{{site.data.alerts.end}} - -### Client connection - -{% include {{ page.version.version }}/sql/connection-parameters.md %} -`--cluster-name` | The cluster name to use to verify the cluster's identity. If the cluster has a cluster name, you must include this flag. For more information, see [`cockroach start`](cockroach-start.html#general). -`--disable-cluster-name-verification` | Disables the cluster name check for this command. This flag must be paired with `--cluster-name`. For more information, see [`cockroach start`](cockroach-start.html#general). - -See [Client Connection Parameters](connection-parameters.html) for details. - -### Logging - -{% include {{ page.version.version }}/misc/logging-defaults.md %} - -## Examples - -Usage of `cockroach init` assumes that nodes have already been started with [`cockroach start`](cockroach-start.html) and are waiting to be initialized as a new cluster. For a more detailed tutorial, see one of the [Manual Deployment](manual-deployment.html) tutorials. - -### Initialize a Cluster on a Node's Machine - -
-1. SSH to the machine where the node has been started. This must be a node that was listed with [`--join`](cockroach-start.html#networking) when starting the cluster. - -2. Make sure the `client.root.crt` and `client.root.key` files for the `root` user are on the machine. - -3. Run the `cockroach init` command with the `--certs-dir` flag set to the directory containing the `ca.crt` file and the files for the `root` user, and with the `--host` flag set to the address of the current node: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach init --certs-dir=certs --host=
- ~~~ - - At this point, all the nodes complete startup and print helpful details to the [standard output](cockroach-start.html#standard-output), such as the CockroachDB version, the URL for the DB Console, and the SQL URL for clients. -
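To confirm that initialization succeeded, you can check that every node reports itself as live. A minimal check, assuming the same `certs` directory and a node listening at `localhost:26257` (substitute the address of any node in your cluster):

{% include_cached copy-clipboard.html %}
~~~ shell
$ cockroach node status --certs-dir=certs --host=localhost:26257
~~~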
- -
-1. SSH to the machine where the node has been started. This must be a node that was listed with [`--join`](cockroach-start.html#networking) when starting the cluster. - -2. Run the `cockroach init` command with the `--host` flag set to the address of the current node: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach init --insecure --host=
- ~~~ - - At this point, all the nodes complete startup and print helpful details to the [standard output](cockroach-start.html#standard-output), such as the CockroachDB version, the URL for the DB Console, and the SQL URL for clients. -
- -### Initialize a cluster from another machine - -
-1. [Install the `cockroach` binary](install-cockroachdb.html) on a machine separate from the node. - -2. Create a `certs` directory and copy the CA certificate and the client certificate and key for the `root` user into the directory. - -3. Run the `cockroach init` command with the `--certs-dir` flag set to the directory containing the `ca.crt` file and the files for the `root` user, and with the `--host` flag set to the address of the node. This must be a node that was listed with [`--join`](cockroach-start.html#networking) when starting the cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach init --certs-dir=certs --host=
- ~~~ - - At this point, all the nodes complete startup and print helpful details to the [standard output](cockroach-start.html#standard-output), such as the CockroachDB version, the URL for the DB Console, and the SQL URL for clients. -
- -
-1. [Install the `cockroach` binary](install-cockroachdb.html) on a machine separate from the node. - -2. Run the `cockroach init` command with the `--host` flag set to the address of the node. This must be a node that was listed with [`--join`](cockroach-start.html#networking) when starting the cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach init --insecure --host=
- ~~~ - - At this point, all the nodes complete startup and print helpful details to the [standard output](cockroach-start.html#standard-output), such as the CockroachDB version, the URL for the DB Console, and the SQL URL for clients. -
- -## See also - -- [Manual Deployment](manual-deployment.html) -- [Orchestrated Deployment](kubernetes-overview.html) -- [Local Deployment](start-a-local-cluster.html) -- [`cockroach start`](cockroach-start.html) -- [`cockroach` Commands Overview](cockroach-commands.html) diff --git a/src/current/v22.1/cockroach-node.md b/src/current/v22.1/cockroach-node.md deleted file mode 100644 index 3270cfd9857..00000000000 --- a/src/current/v22.1/cockroach-node.md +++ /dev/null @@ -1,327 +0,0 @@ ---- -title: cockroach node -summary: To view details for each node in the cluster, use the cockroach node command with the appropriate subcommands and flags. -toc: true -key: view-node-details.html -docs_area: reference.cli ---- - -To view details for each node in the cluster, use the `cockroach node` [command](cockroach-commands.html) with the appropriate subcommands and flags. - -The `cockroach node` command is also used to stop or remove nodes from the cluster. For details, see [Node Shutdown](node-shutdown.html). - -## Subcommands - -Subcommand | Usage ------------|------ -`ls` | List the ID of each node in the cluster, excluding those that have been decommissioned and are offline. -`status` | View the status of one or all nodes, excluding nodes that have been decommissioned and taken offline. Depending on flags used, this can include details about range/replicas, disk usage, and decommissioning progress. -`decommission` | Decommission nodes for removal from the cluster. For details, see [Node Shutdown](node-shutdown.html?filters=decommission). -`recommission` | Recommission nodes that are decommissioning. If the decommissioning node has already reached the [draining stage](node-shutdown.html?filters=decommission#draining), you may need to restart the node after it is recommissioned. For details, see [Node Shutdown](node-shutdown.html#recommission-nodes). -`drain` | Drain nodes in preparation for process termination. Draining always occurs when sending a termination signal or decommissioning a node. The `drain` subcommand is used to drain nodes without also decommissioning or shutting them down. For details, see [Node Shutdown](node-shutdown.html). - -## Synopsis - -List the IDs of active and inactive nodes: - -~~~ shell -$ cockroach node ls -~~~ - -Show status details for active and inactive nodes: - -~~~ shell -$ cockroach node status -~~~ - -Show status and range/replica details for active and inactive nodes: - -~~~ shell -$ cockroach node status --ranges -~~~ - -Show status and disk usage details for active and inactive nodes: - -~~~ shell -$ cockroach node status --stats -~~~ - -Show status and decommissioning details for active and inactive nodes: - -~~~ shell -$ cockroach node status --decommission -~~~ - -Show complete status details for active and inactive nodes: - -~~~ shell -$ cockroach node status --all -~~~ - -Show status details for a specific node: - -~~~ shell -$ cockroach node status -~~~ - -Decommission nodes: - -~~~ shell -$ cockroach node decommission -~~~ - -Recommission nodes: - -~~~ shell -$ cockroach node recommission -~~~ - -Drain nodes: - -~~~ shell -$ cockroach node drain -~~~ - -View help: - -~~~ shell -$ cockroach node --help -~~~ -~~~ shell -$ cockroach node --help -~~~ - -## Flags - -All `node` subcommands support the following [general-use](#general) and [logging](#logging) flags. - -### General - -Flag | Description ------|------------ -`--format` | How to display table rows printed to the standard output. Possible values: `tsv`, `csv`, `table`, `records`, `sql`, `html`.

**Default:** `tsv` - -The `node ls` subcommand also supports the following general flags: - -Flag | Description ------|------------ -`--timeout` | Set the duration of time that the subcommand is allowed to run before it returns an error and prints partial information. The timeout is specified with a suffix of `s` for seconds, `m` for minutes, and `h` for hours. If this flag is not set, the subcommand may hang. - -The `node status` subcommand also supports the following general flags: - -Flag | Description ------|------------ -`--all` | Show all node details. -`--decommission` | Show node decommissioning details. -`--ranges` | Show node details for ranges and replicas. -`--stats` | Show node disk usage details. -`--timeout` | Set the duration of time that the subcommand is allowed to run before it returns an error and prints partial information. The timeout is specified with a suffix of `s` for seconds, `m` for minutes, and `h` for hours. If this flag is not set, the subcommand may hang. - -The `node decommission` subcommand also supports the following general flags: - -Flag | Description ------|------------ -`--wait` | When to return to the client. Possible values: `all`, `none`.

If `all`, the command returns to the client only after all replicas on all specified nodes have been transferred to other nodes. If any specified nodes are offline, the command will not return to the client until those nodes are back online.

If `none`, the command does not wait for the decommissioning process to complete; it returns to the client after starting the decommissioning process on all specified nodes that are online. Any specified nodes that are offline will automatically be marked as decommissioning; if they come back online, the cluster will recognize this status and will not rebalance data to the nodes.

**Default:** `all` -`--self` | **Deprecated.** Instead, specify a node ID explicitly in addition to the `--host` flag. - -The `node drain` subcommand also supports the following general flags: - -Flag | Description ------|------------ -`--drain-wait` | Amount of time to wait for the node to drain before returning to the client. If draining fails to complete within this duration, you must re-initiate the command to continue the drain. A very long drain may indicate an anomaly, and you should manually inspect the server to determine what blocks the drain.

**Default:** `10m` -`--self` | Applies the operation to the node against which the command was run (e.g., via `--host`). - -The `node recommission` subcommand also supports the following general flag: - -Flag | Description ------|------------ -`--self` | Applies the operation to the node against which the command was run (e.g., via `--host`). - -### Client connection - -{% include {{ page.version.version }}/sql/connection-parameters.md %} - -The `node decommission`, `node recommission`, and `node drain` subcommands also support the following client connection flags: - -Flag | Description ------|------------ -`--cluster-name` | The cluster name to use to verify the cluster's identity. If the cluster has a cluster name, you must include this flag. For more information, see [`cockroach start`](cockroach-start.html#general). -`--disable-cluster-name-verification` | Disables the cluster name check for this command. This flag must be paired with `--cluster-name`. For more information, see [`cockroach start`](cockroach-start.html#general). - -See [Client Connection Parameters](connection-parameters.html) for more details. - -### Logging - -{% include {{ page.version.version }}/misc/logging-defaults.md %} - -## Response - -The `cockroach node` subcommands return the following fields for each node. - -### `node ls` - -Field | Description -------|------------ -`id` | The ID of the node. - -### `node status` - -Field | Description -------|------------ -`id` | The ID of the node.

**Required flag:** None -`address` | The address of the node.

**Required flag:** None -`build` | The version of CockroachDB running on the node. If the binary was built from source, this will be the SHA hash of the commit used.

**Required flag:** None -`locality` | The [locality](cockroach-start.html#locality) information specified for the node.

**Required flag:** None -`updated_at` | The date and time when the node last recorded the information displayed in this command's output. When healthy, a new status should be recorded every 10 seconds or so, but when unhealthy this command's stats may be much older.

**Required flag:** None -`started_at` | The date and time when the node was started.

**Required flag:** None -`replicas_leaders` | The number of range replicas on the node that are the Raft leader for their range. See `replicas_leaseholders` below for more details.

**Required flag:** `--ranges` or `--all` -`replicas_leaseholders` | The number of range replicas on the node that are the leaseholder for their range. A "leaseholder" replica handles all read requests for a range and directs write requests to the range's Raft leader (usually the same replica as the leaseholder).

**Required flag:** `--ranges` or `--all` -`ranges` | The number of ranges that have replicas on the node.

**Required flag:** `--ranges` or `--all` -`ranges_unavailable` | The number of unavailable ranges that have replicas on the node.

**Required flag:** `--ranges` or `--all` -`ranges_underreplicated` | The number of underreplicated ranges that have replicas on the node.

**Required flag:** `--ranges` or `--all` -`live_bytes` | The amount of live data used by both applications and the CockroachDB system. This excludes historical and deleted data.

**Required flag:** `--stats` or `--all` -`key_bytes` | The amount of live and non-live data from keys in the key-value storage layer. This does not include data used by the CockroachDB system.

**Required flag:** `--stats` or `--all` -`value_bytes` | The amount of live and non-live data from values in the key-value storage layer. This does not include data used by the CockroachDB system.

**Required flag:** `--stats` or `--all` -`intent_bytes` | The amount of non-live data associated with uncommitted (or recently-committed) transactions.

**Required flag:** `--stats` or `--all` -`system_bytes` | The amount of data used just by the CockroachDB system.

**Required flag:** `--stats` or `--all` -`is_available` | If `true`, the node is currently available.

**Required flag:** None -`is_live` | If `true`, the node is currently live.

For unavailable clusters (with an unresponsive DB Console), running the `node status` command and monitoring the `is_live` field is the only way to identify the live nodes in the cluster. However, the `node status` command must itself be run against a live node, so finding one is a matter of trial and error: run the command against each node until one responds.

See [Identify live nodes in an unavailable cluster](#identify-live-nodes-in-an-unavailable-cluster) for more details.

**Required flag:** None -`gossiped_replicas` | The number of replicas on the node that are active members of a range. After the decommissioning process completes, this should be 0.

**Required flag:** `--decommission` or `--all` -`is_decommissioning` | If `true`, the node is either undergoing or has completed the [decommissioning process](node-shutdown.html?filters=decommission#node-shutdown-sequence).

**Required flag:** `--decommission` or `--all` -`is_draining` | If `true`, the node is either undergoing or has completed the [draining process](node-shutdown.html#node-shutdown-sequence).

**Required flag:** `--decommission` or `--all` - -### `node decommission` - -Field | Description -------|------------ -`id` | The ID of the node. -`is_live` | If `true`, the node is live. -`replicas` | The number of replicas on the node that are active members of a range. After the decommissioning process completes, this should be 0. -`is_decommissioning` | If `true`, the node is either undergoing or has completed the [decommissioning process](node-shutdown.html?filters=decommission#node-shutdown-sequence). -`is_draining` | If `true`, the node is either undergoing or has completed the [draining process](node-shutdown.html#node-shutdown-sequence). - -If the rebalancing stalls during decommissioning, replicas that have yet to move are printed to the [SQL shell](cockroach-sql.html) and written to the [`OPS` logging channel](logging-overview.html#logging-channels) with the message `possible decommission stall detected`. [By default](configure-logs.html#default-logging-configuration), the `OPS` channel logs output to a `cockroach.log` file. - -### `node recommission` - -Field | Description -------|------------ -`id` | The ID of the node. -`is_live` | If `true`, the node is live. -`replicas` | The number of replicas on the node that are active members of a range. After the decommissioning process completes, this should be 0. -`is_decommissioning` | If `true`, the node is either undergoing or has completed the [decommissioning process](node-shutdown.html?filters=decommission#node-shutdown-sequence). -`is_draining` | If `true`, the node is either undergoing or has completed the [draining process](node-shutdown.html#node-shutdown-sequence). - -## Examples - -### Setup - -To follow along with the examples, start [an insecure cluster](start-a-local-cluster.html), with [localities](cockroach-start.html#locality) defined. 
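One way to do this locally is to start three nodes with distinct ports, stores, and `--locality` values, then initialize the cluster. This is a sketch only; the store paths, ports, and localities are illustrative and chosen to match the output shown in the following examples:

{% include_cached copy-clipboard.html %}
~~~ shell
$ cockroach start --insecure --store=node1 --listen-addr=localhost:26257 --http-addr=localhost:8080 \
  --join=localhost:26257,localhost:26258,localhost:26259 --locality=region=us-east,az=1 --background
$ cockroach start --insecure --store=node2 --listen-addr=localhost:26258 --http-addr=localhost:8081 \
  --join=localhost:26257,localhost:26258,localhost:26259 --locality=region=us-central,az=2 --background
$ cockroach start --insecure --store=node3 --listen-addr=localhost:26259 --http-addr=localhost:8082 \
  --join=localhost:26257,localhost:26258,localhost:26259 --locality=region=us-west,az=3 --background
$ cockroach init --insecure --host=localhost:26257
~~~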
- -### List node IDs - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach node ls --insecure -~~~ - -~~~ - id -+----+ - 1 - 2 - 3 -(3 rows) -~~~ - -### Show the status of a single node - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach node status 1 --host=localhost:26257 --insecure -~~~ - -~~~ - id | address | sql_address | build | started_at | updated_at | locality | is_available | is_live -+----+-----------------+-----------------+-----------------------------------------+----------------------------------+---------------------------------+---------------------+--------------+---------+ - 1 | localhost:26257 | localhost:26257 | v19.2.0-alpha.20190606-2479-gd98e0839dc | 2019-10-01 20:04:54.308502+00:00 | 2019-10-01 20:05:43.85563+00:00 | region=us-east,az=1 | true | true -(1 row) -~~~ - -### Show the status of all nodes - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach node status --host=localhost:26257 --insecure -~~~ - -~~~ - id | address | sql_address | build | started_at | updated_at | locality | is_available | is_live -+----+-----------------+-----------------+-----------------------------------------+----------------------------------+----------------------------------+------------------------+--------------+---------+ - 1 | localhost:26257 | localhost:26257 | v19.2.0-alpha.20190606-2479-gd98e0839dc | 2019-10-01 20:04:54.308502+00:00 | 2019-10-01 20:06:15.356886+00:00 | region=us-east,az=1 | true | true - 2 | localhost:26258 | localhost:26258 | v19.2.0-alpha.20190606-2479-gd98e0839dc | 2019-10-01 20:04:54.551761+00:00 | 2019-10-01 20:06:15.583967+00:00 | region=us-central,az=2 | true | true - 3 | localhost:26259 | localhost:26259 | v19.2.0-alpha.20190606-2479-gd98e0839dc | 2019-10-01 20:04:55.178577+00:00 | 2019-10-01 20:06:16.204549+00:00 | region=us-west,az=3 | true | true -(3 rows) -~~~ - -### Identify live nodes in an unavailable cluster - -The `is_live` and `is_available` columns give you information about a node's current status: - -- `is_live`: The node is up and running -- `is_available`: The node is part of the [quorum](architecture/replication-layer.html#overview). - -Only nodes that are both `is_live: true` and `is_available: true` can participate in the cluster. If either are `false`, check the logs so you can troubleshoot the node(s) in question. 
- -For example, the following indicates a healthy cluster, where a majority of the nodes are up (`is_live: true`) and a quorum can be reached (`is_available: true` for live nodes): - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach node status --host=localhost:26257 --insecure -~~~ - -~~~ - id | address | sql_address | build | started_at | updated_at | locality | is_available | is_live -+----+-----------------+-----------------+-----------------------------------------+----------------------------------+----------------------------------+------------------------+--------------+---------+ - 1 | localhost:26257 | localhost:26257 | v19.2.0-alpha.20190606-2479-gd98e0839dc | 2019-10-01 20:04:54.308502+00:00 | 2019-10-01 20:07:04.857339+00:00 | region=us-east,az=1 | true | true - 2 | localhost:26258 | localhost:26258 | v19.2.0-alpha.20190606-2479-gd98e0839dc | 2019-10-01 20:04:54.551761+00:00 | 2019-10-01 20:06:48.555863+00:00 | region=us-central,az=2 | false | false - 3 | localhost:26259 | localhost:26259 | v19.2.0-alpha.20190606-2479-gd98e0839dc | 2019-10-01 20:04:55.178577+00:00 | 2019-10-01 20:07:01.207697+00:00 | region=us-west,az=3 | true | true -(3 rows) -~~~ - -The following indicates an unhealthy cluster, where a majority of nodes are down (`is_live: false`), and thereby quorum cannot be reached (`is_available: false` for all nodes): - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach node status --host=localhost:26257 --insecure -~~~ - -~~~ - id | address | sql_address | build | started_at | updated_at | locality | is_available | is_live -+----+-----------------+-----------------+-----------------------------------------+----------------------------------+----------------------------------+------------------------+--------------+---------+ - 1 | localhost:26257 | localhost:26257 | v19.2.0-alpha.20190606-2479-gd98e0839dc | 2019-10-01 20:04:54.308502+00:00 | 2019-10-01 20:07:37.464249+00:00 | region=us-east,az=1 | false | true - 2 | localhost:26258 | localhost:26258 | v19.2.0-alpha.20190606-2479-gd98e0839dc | 2019-10-01 20:04:54.551761+00:00 | 2019-10-01 20:07:37.464259+00:00 | region=us-central,az=2 | false | false - 3 | localhost:26259 | localhost:26259 | v19.2.0-alpha.20190606-2479-gd98e0839dc | 2019-10-01 20:04:55.178577+00:00 | 2019-10-01 20:07:37.464265+00:00 | region=us-west,az=3 | false | false -(3 rows) -~~~ - -{{site.data.alerts.callout_info}} -You need to run the `node status` command on a live node to identify the other live nodes in an unavailable cluster. Figuring out a live node to run the command is a trial-and-error process, so run the command against each node until you get one that responds. -{{site.data.alerts.end}} - -### Drain nodes - -See [Drain a node manually](node-shutdown.html#drain-a-node-manually). - -### Decommission nodes - -See [Remove nodes](node-shutdown.html?filters=decommission#remove-nodes). - -### Recommission nodes - -See [Recommission Nodes](node-shutdown.html?filters=decommission#recommission-nodes). 
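For quick reference, a decommission-and-recommission round trip against node 3 of the cluster from [Setup](#setup) might look like the sketch below; `--wait=none` (described under [General](#general) flags) returns immediately instead of waiting for replicas to move. For the full procedure and its safety checks, follow the Node Shutdown pages linked above:

{% include_cached copy-clipboard.html %}
~~~ shell
# Begin decommissioning node 3 without waiting for replicas to transfer.
$ cockroach node decommission 3 --wait=none --insecure --host=localhost:26257

# Reverse course and return node 3 to service.
$ cockroach node recommission 3 --insecure --host=localhost:26257
~~~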
- -## See also - -- [`cockroach` Commands Overview](cockroach-commands.html) -- [Node Shutdown](node-shutdown.html) diff --git a/src/current/v22.1/cockroach-nodelocal-upload.md b/src/current/v22.1/cockroach-nodelocal-upload.md deleted file mode 100644 index 4d3a5f20e58..00000000000 --- a/src/current/v22.1/cockroach-nodelocal-upload.md +++ /dev/null @@ -1,100 +0,0 @@ ---- -title: cockroach nodelocal upload -summary: The cockroach nodelocal upload command uploads a file to the external IO directory on a node's (the gateway node, by default) local file system. -toc: true -docs_area: reference.cli ---- - - The `cockroach nodelocal upload` [command](cockroach-commands.html) uploads a file to the external IO directory on a node's (the gateway node, by default) local file system. - -This command takes in a source file to upload and a destination filename. It will then use a SQL connection to upload the file to the node's local file system, at `externalIODir/destination/filename`. - -{{site.data.alerts.callout_info}} -The source file is only uploaded to one node, not all of the nodes. -{{site.data.alerts.end}} - -{% include {{ page.version.version }}/misc/userfile.md %} - -## Required privileges - -Only members of the `admin` role can run `cockroach nodelocal upload`. By default, the `root` user belongs to the `admin` role. - -## Considerations - -The [`--external-io`](cockroach-start.html#general) flag on the node you're uploading to **cannot** be set to `disabled`. - -## Synopsis - -Upload a file: - -~~~ shell -$ cockroach nodelocal upload [flags] -~~~ - -View help: - -~~~ shell -$ cockroach nodelocal upload --help -~~~ - -## Flags - - Flag | Description ------------------+----------------------------------------------------- -`--certs-dir` | The path to the [certificate directory](cockroach-cert.html) containing the CA and client certificates and client key.

**Env Variable:** `COCKROACH_CERTS_DIR`
**Default:** `${HOME}/.cockroach-certs/` -`--echo-sql` | Reveal the SQL statements sent implicitly by the command-line utility. -`--host` | The server host and port number to connect to. This can be the address of any node in the cluster.

**Env Variable:** `COCKROACH_HOST`
**Default:** `localhost:26257` -`--insecure` | Use an insecure connection.

**Env Variable:** `COCKROACH_INSECURE`
**Default:** `false` -`--url` | A [connection URL](connection-parameters.html#connect-using-a-url) to use instead of the other arguments.

**Env Variable:** `COCKROACH_URL`
**Default:** no URL -`--user`
`-u` | The [SQL user](create-user.html) that will own the client session.

**Env Variable:** `COCKROACH_USER`
**Default:** `root` - -## Examples - -### Upload a file - -To upload a file to the default node (i.e., the gateway node): - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach nodelocal upload ./grants.csv test/grants.csv --certs-dir=certs -~~~ - -~~~ -successfully uploaded to nodelocal://1/test/grants.csv -~~~ - -Then, you can use the file to [`IMPORT`](import.html) or [`IMPORT INTO`](import-into.html) data. - -### Upload a file to a specific node - -To upload a file to a specific node (e.g., node 2), use the `--host` flag: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach nodelocal upload ./grants.csv grants.csv --host=localhost:26259 --insecure -~~~ - -~~~ -successfully uploaded to nodelocal://2/grants.csv -~~~ - -Or, use the `--url` flag: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach nodelocal upload ./grants.csv grants.csv --url=postgresql://root@localhost:26258?sslmode=disable --insecure -~~~ - -~~~ -successfully uploaded to nodelocal://3/grants.csv -~~~ - -Then, you can use the file to [`IMPORT`](import.html) or [`IMPORT INTO`](import-into.html) data. - -## See also - -- [`cockroach` Commands Overview](cockroach-commands.html) -- [Troubleshooting Overview](troubleshooting-overview.html) -- [Import Data](migration-overview.html) -- [`IMPORT`](import.html) -- [`IMPORT INTO`](import-into.html) diff --git a/src/current/v22.1/cockroach-sql-binary.md b/src/current/v22.1/cockroach-sql-binary.md deleted file mode 100644 index b3b54df4b14..00000000000 --- a/src/current/v22.1/cockroach-sql-binary.md +++ /dev/null @@ -1,223 +0,0 @@ ---- -title: cockroach-sql -summary: cockroach-sql is a client for executing SQL statements from an interactive shell or directly from the command line. -toc: true -docs_area: reference.cli ---- - -{% include_cached new-in.html version="v22.1" %} - -The `cockroach-sql` command is a client for executing SQL statements from an interactive shell or directly from the command line. To use this client, run `cockroach-sql` as described below. - -{{site.data.alerts.callout_info}} -`cockroach-sql` is functionally equivalent to the [`cockroach sql` command](cockroach-sql.html). -{{site.data.alerts.end}} - -To exit the interactive shell, enter **\q**, **quit**, **exit**, or **Ctrl+D**. - -The output of `cockroach-sql` when used non-interactively is part of a stable interface, and can be used programmatically, with the exception of informational output lines that begin with the hash symbol (`#`). Informational output can change from release to release, and should not be used programmatically. - -## Install `cockroach-sql` - -
- - - -
- -Download the binary and copy it into your `PATH`. - -
- -{% include_cached copy-clipboard.html %} -~~~ shell -curl https://binaries.cockroachdb.com/cockroach-sql-{{ page.release_info.version }}.linux-amd64.tgz | tar -xz && sudo cp -i cockroach-sql-{{ page.release_info.version }}.linux-amd64/cockroach-sql /usr/local/bin/ && if [ ! -f /usr/local/bin/cockroach ]; then sudo ln -s /usr/local/bin/cockroach-sql /usr/local/bin/cockroach; fi -~~~ - -If you don't have an existing `cockroach` binary in `/usr/local/bin` this will create a symbolic link to `cockroach` so you can use the `cockroach sql` command. -
- -
- -{% include_cached copy-clipboard.html %} -~~~ shell -curl https://binaries.cockroachdb.com/cockroach-sql-{{ page.release_info.version }}.darwin-10.9-amd64.tgz | tar -xz && sudo cp -i cockroach-sql-{{ page.release_info.version }}.darwin-10.9-amd64/cockroach-sql /usr/local/bin && if [ ! -f /usr/local/bin/cockroach ]; then sudo ln -s /usr/local/bin/cockroach-sql /usr/local/bin/cockroach; fi -~~~ - -If you don't have an existing `cockroach` binary in `/usr/local/bin` this will create a symbolic link to `cockroach` so you can use the `cockroach sql` command. -
- -
- -Open a PowerShell terminal as an Administrator, then run the following command: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ErrorActionPreference = "Stop"; [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12;$ProgressPreference = 'SilentlyContinue'; $null = New-Item -Type Directory -Force $env:appdata/cockroach; Invoke-WebRequest -Uri https://binaries.cockroachdb.com/cockroach-sql-{{ page.release_info.version }}.windows-6.2-amd64.zip -OutFile cockroach-sql.zip; Expand-Archive -Force -Path cockroach-sql.zip; Copy-Item -Force "cockroach-sql/cockroach-sql-{{ page.release_info.version }}.windows-6.2-amd64/cockroach-sql.exe" -Destination $env:appdata/cockroach; $Env:PATH += ";$env:appdata/cockroach"; if (!(Test-Path "$env:appdata/cockroach/cockroach.exe")) { New-Item -ItemType SymbolicLink -Path $env:appdata/cockroach/cockroach.exe -Target $env:appdata/cockroach/cockroach-sql.exe } -~~~ - -If you don't have an existing `cockroach` binary in `$env:appdata/cockroach/` this will create a symbolic link to `cockroach` so you can use the `cockroach sql` command. - -
- -Or you can download the [binary from the releases page](../releases/{{ page.version.version }}.html) and install it manually. - -## Before you begin - -- The [role option of the user](create-role.html#role-options) logging in must be `LOGIN` or `SQLLOGIN`, which are granted by default. If the user's role option has been set to `NOLOGIN` or `NOSQLLOGIN`, the user cannot log in using the SQL CLI with any authentication method. -- **macOS users only:** By default, macOS-based terminals do not enable handling of the Alt key modifier. This prevents access to many keyboard shortcuts in the unix shell and `cockroach sql`. See the section [macOS terminal configuration](#macos-terminal-configuration) below for details. - -## Synopsis - -Start the interactive SQL shell: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach-sql -~~~ - -Execute SQL from the command line: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach-sql -e=";" -e="" -~~~ - -{% include_cached copy-clipboard.html %} -~~~ shell -$ echo ";" | cockroach-sql -~~~ - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach-sql -f file-containing-statements.sql -~~~ - -Exit the interactive SQL shell: - -{% include_cached copy-clipboard.html %} -~~~ sql -> \q -~~~ - -View help: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach-sql --help -~~~ - -## Flags - -The `sql` command supports the following types of flags: - -- [General Use](#general) -- [Client Connection](#client-connection) -- [Logging](#logging) - -### General - -- To start an interactive SQL shell, run `cockroach-sql` with all appropriate connection flags or use just the [`--url` flag](#sql-flag-url), which includes [connection details](connection-parameters.html#connect-using-a-url). -- To execute SQL statements from the command line, use the [`--execute` flag](#sql-flag-execute). - -Flag | Description ------|------------ -`--database`

`-d` | A database name to use as [current database](sql-name-resolution.html#current-database) in the newly created session. -`--embedded` | Minimizes the SQL shell [welcome text](#welcome-message) to be appropriate for embedding in playground-type environments. Specifically, this flag removes details that users in an embedded environment have no control over (e.g., networking information). -`--echo-sql` | Reveal the SQL statements sent implicitly by the command-line utility. For a demonstration, see the [example](#reveal-the-sql-statements-sent-implicitly-by-the-command-line-utility) below.

This can also be enabled within the interactive SQL shell via the `\set echo` [shell command](#commands). - `--execute`

`-e` | Execute SQL statements directly from the command line, without opening a shell. This flag can be set multiple times, and each instance can contain one or more statements separated by semi-colons. If an error occurs in any statement, the command exits with a non-zero status code and further statements are not executed. The results of each statement are printed to the standard output (see `--format` for formatting options).

For a demonstration of this and other ways to execute SQL from the command line, see the [example](#execute-sql-statements-from-the-command-line) below. -`--file <filename>`

`-f <filename>` | Read SQL statements from `<filename>`. - `--format` | How to display table rows printed to the standard output. Possible values: `tsv`, `csv`, `table`, `raw`, `records`, `sql`, `html`.

**Default:** `table` for sessions that [output on a terminal](#session-and-output-types); `tsv` otherwise

This flag corresponds to the `display_format` [client-side option](#client-side-options). -`--read-only` | Sets the `default_transaction_read_only` [session variable](show-vars.html#supported-variables) to `on` upon connecting. -`--safe-updates` | Disallow potentially unsafe SQL statements, including `DELETE` without a `WHERE` clause, `UPDATE` without a `WHERE` clause, and `ALTER TABLE ... DROP COLUMN`.

**Default:** `true` for [interactive sessions](#session-and-output-types); `false` otherwise

Potentially unsafe SQL statements can also be allowed/disallowed for an entire session via the `sql_safe_updates` [session variable](set-vars.html). -`--set` | Set a [client-side option](#client-side-options) before starting the SQL shell or executing SQL statements from the command line via `--execute`. This flag may be specified multiple times, once per option.

After starting the SQL shell, the `\set` and `\unset` commands can be used to enable and disable client-side options as well. -`--watch` | Repeat the SQL statements specified with `--execute` or `-e` until a SQL error occurs or the process is terminated. `--watch` applies to all `--execute` or `-e` flags in use.
You must also specify an interval at which to repeat the statement, followed by a time unit. For example, to specify an interval of 5 seconds, use `5s`.

Note that this flag is intended for simple monitoring scenarios during development and testing. See the [example](#repeat-a-sql-statement) below. - - -### Client connection - -{% include {{ page.version.version }}/sql/connection-parameters.md %} - -See [Client Connection Parameters](connection-parameters.html) for more details. - -### Logging - -{% include {{ page.version.version }}/misc/logging-defaults.md %} - -## Session and output types - -`cockroach-sql` exhibits different behaviors depending on whether or not the session is interactive and/or whether or not the session outputs on a terminal. - -- A session is **interactive** when `cockroach-sql` is invoked without the `-e` or `-f` flag, and the input is a terminal. In such cases: - - The [`errexit` option](#sql-option-errexit) defaults to `false`. - - The [`check_syntax` option](#sql-option-check-syntax) defaults to `true` if supported by the CockroachDB server (this is checked when the shell starts up). - - **Ctrl+C** at the prompt will only terminate the shell if no other input was entered on the same line already. - - The shell will attempt to set the `safe_updates` [session variable](set-vars.html) to `true` on the server. - - The shell continues to read input after the last command entered. -- A session **outputs on a terminal** when output is not redirected to a file. In such cases: - - The [`--format` flag](#sql-flag-format) and its corresponding [`display_format` option](#sql-option-display-format) default to `table`. These default to `tsv` otherwise. - - The `show_times` option defaults to `true`. - -When a session is both interactive and outputs on a terminal, `cockroach-sql` also activates the interactive prompt with a line editor that can be used to modify the current line of input. Also, command history becomes active. - -## SQL shell - -### Welcome message - -When the SQL shell connects (or reconnects) to a CockroachDB node, it prints a welcome text with some tips and CockroachDB version and cluster details: - -~~~ shell -# -# Welcome to the CockroachDB SQL shell. -# All statements must be terminated by a semicolon. -# To exit, type: \q. -# -# Server version: CockroachDB CCL {{page.release_info.version}} (x86_64-apple-darwin17.7.0, built {{page.release_info.build_time}}) (same version as client) -# Cluster ID: 7fb9f5b4-a801-4851-92e9-c0db292d03f1 -# -# Enter \? for a brief introduction. -# -> -~~~ - -The **Version** and **Cluster ID** details are particularly noteworthy: - -- When the client and server versions of CockroachDB are the same, the shell prints the `Server version` followed by `(same version as client)`. -- When the client and server versions are different, the shell prints both the `Client version` and `Server version`. In this case, you may want to [plan an upgrade](upgrade-cockroach-version.html) of earlier client or server versions. -- Since every CockroachDB cluster has a unique ID, you can use the `Cluster ID` field to verify that your client is always connecting to the correct cluster. 
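If you have already dismissed the welcome text, you can recover the same details from within the shell. As an illustrative check (not part of the welcome message itself), the built-in `version()` function returns the server version, and `crdb_internal.cluster_id()` (available in recent versions) returns the cluster ID:

~~~ sql
> SELECT version();
> SELECT crdb_internal.cluster_id();
~~~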
- -### Commands - -{% include {{ page.version.version }}/sql/shell-commands.md %} - -### Client-side options - -{% include {{ page.version.version }}/sql/shell-options.md %} - -### Help - -{% include {{ page.version.version }}/sql/shell-help.md %} - -### Shortcuts - -{% include {{ page.version.version }}/sql/shell-shortcuts.md %} - -### macOS terminal configuration - -{% include {{ page.version.version }}/sql/macos-terminal-configuration.md %} - -### Error messages and `SQLSTATE` codes - -{% include {{ page.version.version }}/sql/sql-errors.md %} - -## Examples - -{% include {{ page.version.version }}/sql/sql-examples.md %} - -## See also - -- [Client Connection Parameters](connection-parameters.html) -- [`cockroach demo`](cockroach-demo.html) -- [`cockroach` Commands Overview](cockroach-commands.html) -- [SQL Statements](sql-statements.html) -- [Learn CockroachDB SQL](learn-cockroachdb-sql.html) diff --git a/src/current/v22.1/cockroach-sql.md b/src/current/v22.1/cockroach-sql.md deleted file mode 100644 index 0a7b1dd4b94..00000000000 --- a/src/current/v22.1/cockroach-sql.md +++ /dev/null @@ -1,185 +0,0 @@ ---- -title: cockroach sql -summary: CockroachDB comes with a built-in client for executing SQL statements from an interactive shell or directly from the command line. -toc: true -key: use-the-built-in-sql-client.html -docs_area: reference.cli ---- - -CockroachDB comes with a built-in client for executing SQL statements from an interactive shell or directly from the command line. To use this client, run the `cockroach sql` [command](cockroach-commands.html) as described below. - -To exit the interactive shell, use **\q**, **quit**, **exit**, or **Ctrl+D**. - -{{site.data.alerts.callout_success}} -If you want to experiment with CockroachDB SQL but do not have a cluster already running, you can use the [`cockroach demo`](cockroach-demo.html) command to open a shell to a temporary, in-memory cluster. -{{site.data.alerts.end}} - -The output of `cockroach sql` when used non-interactively is part of a stable interface, and can be used programmatically, with the exception of informational output lines that begin with the hash symbol (`#`). Informational output can change from release to release, and should not be used programmatically. - -## Before you begin - -- The [role option of the user](create-role.html#role-options) logging in must be `LOGIN` or `SQLLOGIN`, which are granted by default. If the user's role option has been set to `NOLOGIN` or `NOSQLLOGIN`, the user cannot log in using the SQL CLI with any authentication method. -- **macOS users only:** By default, macOS-based terminals do not enable handling of the Alt key modifier. This prevents access to many keyboard shortcuts in the unix shell and `cockroach sql`. See the section [macOS terminal configuration](#macos-terminal-configuration) below for details. 
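As a brief illustration of the role-option requirement above: if login has been disabled for a user, a member of the `admin` role can restore it with a statement along the following lines (the user name `maxroach` is only a placeholder):

~~~ sql
> ALTER ROLE maxroach WITH LOGIN;
~~~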
- -## Synopsis - -Start the interactive SQL shell: - -~~~ shell -$ cockroach sql -~~~ - -Execute SQL from the command line: - -~~~ shell -$ cockroach sql --execute=";" --execute="" -~~~ -~~~ shell -$ echo ";" | cockroach sql -~~~ -~~~ shell -$ cockroach sql --file file-containing-statements.sql -~~~ - -Exit the interactive SQL shell: - -~~~ sql -> \q -~~~ - -~~~ sql -> quit -~~~ - -~~~ sql -> exit -~~~ - -~~~ shell -> Ctrl+D -~~~ - -View help: - -~~~ shell -$ cockroach sql --help -~~~ - -## Flags - -The `sql` command supports the following types of flags: - -- [General Use](#general) -- [Client Connection](#client-connection) -- [Logging](#logging) - -### General - -- To start an interactive SQL shell, run `cockroach sql` with all appropriate connection flags or use just the [`--url` flag](#sql-flag-url), which includes [connection details](connection-parameters.html#connect-using-a-url). -- To execute SQL statements from the command line, use the [`--execute` flag](#sql-flag-execute). - -Flag | Description ------|------------ -`--database`

`-d` | A database name to use as [current database](sql-name-resolution.html#current-database) in the newly created session. -`--embedded` | Minimizes the SQL shell [welcome text](#welcome-message) to be appropriate for embedding in playground-type environments. Specifically, this flag removes details that users in an embedded environment have no control over (e.g., networking information). -`--echo-sql` | Reveal the SQL statements sent implicitly by the command-line utility. For a demonstration, see the [example](#reveal-the-sql-statements-sent-implicitly-by-the-command-line-utility) below.

This can also be enabled within the interactive SQL shell via the `\set echo` [shell command](#commands). - `--execute`

`-e` | Execute SQL statements directly from the command line, without opening a shell. This flag can be set multiple times, and each instance can contain one or more statements separated by semi-colons. If an error occurs in any statement, the command exits with a non-zero status code and further statements are not executed. The results of each statement are printed to the standard output (see `--format` for formatting options).

For a demonstration of this and other ways to execute SQL from the command line, see the [example](#execute-sql-statements-from-the-command-line) below. -`--file <filename>`

`-f <filename>` | Read SQL statements from `<filename>`. - `--format` | How to display table rows printed to the standard output. Possible values: `tsv`, `csv`, `table`, `raw`, `records`, `sql`, `html`.

**Default:** `table` for sessions that [output on a terminal](#session-and-output-types); `tsv` otherwise

This flag corresponds to the `display_format` [client-side option](#client-side-options). -`--read-only` | **New in v22.1:** Sets the `default_transaction_read_only` [session variable](show-vars.html#supported-variables) to `on` upon connecting. -`--safe-updates` | Disallow potentially unsafe SQL statements, including `DELETE` without a `WHERE` clause, `UPDATE` without a `WHERE` clause, and `ALTER TABLE ... DROP COLUMN`.

**Default:** `true` for [interactive sessions](#session-and-output-types); `false` otherwise

Potentially unsafe SQL statements can also be allowed/disallowed for an entire session via the `sql_safe_updates` [session variable](set-vars.html). -`--set` | Set a [client-side option](#client-side-options) before starting the SQL shell or executing SQL statements from the command line via `--execute`. This flag may be specified multiple times, once per option.

After starting the SQL shell, the `\set` and `\unset` commands can be used to enable and disable client-side options as well. -`--watch` | Repeat the SQL statements specified with `--execute` or `-e` until a SQL error occurs or the process is terminated. `--watch` applies to all `--execute` or `-e` flags in use.
You must also specify an interval at which to repeat the statement, followed by a time unit. For example, to specify an interval of 5 seconds, use `5s`.

Note that this flag is intended for simple monitoring scenarios during development and testing. See the [example](#repeat-a-sql-statement) below. - - -### Client connection - -{% include {{ page.version.version }}/sql/connection-parameters.md %} - -See [Client Connection Parameters](connection-parameters.html) for more details. - -### Logging - -{% include {{ page.version.version }}/misc/logging-defaults.md %} - -## Session and output types - -`cockroach sql` exhibits different behaviors depending on whether or not the session is interactive and/or whether or not the session outputs on a terminal. - -- A session is **interactive** when `cockroach sql` is invoked without the `-e` or `-f` flag, and the input is a terminal. In such cases: - - The [`errexit` option](#sql-option-errexit) defaults to `false`. - - The [`check_syntax` option](#sql-option-check-syntax) defaults to `true` if supported by the CockroachDB server (this is checked when the shell starts up). - - **Ctrl+C** at the prompt will only terminate the shell if no other input was entered on the same line already. - - The shell will attempt to set the `safe_updates` [session variable](set-vars.html) to `true` on the server. - - The shell continues to read input after the last command entered. -- A session **outputs on a terminal** when output is not redirected to a file. In such cases: - - The [`--format` flag](#sql-flag-format) and its corresponding [`display_format` option](#sql-option-display-format) default to `table`. These default to `tsv` otherwise. - - The `show_times` option defaults to `true`. - -When a session is both interactive and outputs on a terminal, `cockroach sql` also activates the interactive prompt with a line editor that can be used to modify the current line of input. Also, command history becomes active. - -## SQL shell - -### Welcome message - -When the SQL shell connects (or reconnects) to a CockroachDB node, it prints a welcome text with some tips and CockroachDB version and cluster details: - -~~~ shell -# -# Welcome to the CockroachDB SQL shell. -# All statements must be terminated by a semicolon. -# To exit, type: \q. -# -# Server version: CockroachDB CCL {{page.release_info.version}} (x86_64-apple-darwin17.7.0, built 2019/09/13 00:07:19, go1.12.6) (same version as client) -# Cluster ID: 7fb9f5b4-a801-4851-92e9-c0db292d03f1 -# -# Enter \? for a brief introduction. -# -> -~~~ - -The **Version** and **Cluster ID** details are particularly noteworthy: - -- When the client and server versions of CockroachDB are the same, the shell prints the `Server version` followed by `(same version as client)`. -- When the client and server versions are different, the shell prints both the `Client version` and `Server version`. In this case, you may want to [plan an upgrade](upgrade-cockroach-version.html) of older client or server versions. -- Since every CockroachDB cluster has a unique ID, you can use the `Cluster ID` field to verify that your client is always connecting to the correct cluster. - - {{site.data.alerts.callout_info}} - For clusters deployed in CockroachDB {{ site.data.products.cloud }}, do not use the cluster ID printed in the welcome message to verify the cluster your client is connected to. Instead, use the `ccloud cluster list` command to list the ID of each cluster in your CockroachDB {{ site.data.products.cloud }} organization to which you have access. To learn more about the `ccloud` command, refer to [Get Started with the `ccloud` CLI]({% link cockroachcloud/ccloud-get-started.md %}). 
- {{site.data.alerts.end}} - -### Commands - -{% include {{ page.version.version }}/sql/shell-commands.md %} - -### Client-side options - -{% include {{ page.version.version }}/sql/shell-options.md %} - -### Help - -{% include {{ page.version.version }}/sql/shell-help.md %} - -### Shortcuts - -{% include {{ page.version.version }}/sql/shell-shortcuts.md %} - -### macOS terminal configuration - -{% include {{ page.version.version }}/sql/macos-terminal-configuration.md %} - -### Error messages and `SQLSTATE` codes - -{% include {{ page.version.version }}/sql/sql-errors.md %} - -## Examples - -{% include {{ page.version.version }}/sql/sql-examples.md %} - -## See also - -- [Client Connection Parameters](connection-parameters.html) -- [`cockroach demo`](cockroach-demo.html) -- [`cockroach` Commands Overview](cockroach-commands.html) -- [SQL Statements](sql-statements.html) -- [Learn CockroachDB SQL](learn-cockroachdb-sql.html) diff --git a/src/current/v22.1/cockroach-sqlfmt.md b/src/current/v22.1/cockroach-sqlfmt.md deleted file mode 100644 index 7e71f373398..00000000000 --- a/src/current/v22.1/cockroach-sqlfmt.md +++ /dev/null @@ -1,161 +0,0 @@ ---- -title: cockroach sqlfmt -summary: Use cockroach sqlfmt to enhance the text layout of a SQL query. -toc: true -key: use-the-query-formatter.html -docs_area: reference.cli ---- - -The `cockroach sqlfmt` -[command](cockroach-commands.html) changes the textual formatting of -one or more SQL queries. It recognizes all SQL extensions supported by -CockroachDB. - -A [web interface to this feature](https://sqlfum.pt/) is also available. - -{{site.data.alerts.callout_info}} -{% include feature-phases/preview.md %} -{{site.data.alerts.end}} - -## Synopsis - -Use the query formatter interactively: - -~~~ shell -$ cockroach sqlfmt - -CTRL+D -~~~ - -Reformat a SQL query given on the command line: - -~~~ shell -$ cockroach sqlfmt -e "" -~~~ - -Reformat a SQL query already stored in a file: - -~~~ shell -$ cat query.sql | cockroach sqlfmt -~~~ - -## Flags - -The `sqlfmt` command supports the following flags. - -Flag | Description | Default value ------|------|---- -`--execute`
`-e` | Reformat the given SQL query, without reading from standard input. | N/A -`--print-width` | Desired column width of the output. | 80 -`--tab-width` | Number of spaces occupied by a tab character on the final display device. | 4 -`--use-spaces` | Always use space characters for formatting; avoid tab characters. | Use tabs. -`--align` | Use vertical alignment during formatting. | Do not align vertically. -`--no-simplify` | Avoid removing optional grouping parentheses during formatting. | Remove unnecessary grouping parentheses. - -## Examples - -### Reformat a query with constrained column width - -Using the interactive query formatter, output with the default column width (80 columns): - -1. Start the interactive query formatter: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sqlfmt - ~~~ - -2. Press **Enter**. - -3. Run the query: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE TABLE animals (id INT PRIMARY KEY DEFAULT unique_rowid(), name STRING); - ~~~ -4. Press **CTRL+D**. - - ~~~ sql - CREATE TABLE animals ( - id INT PRIMARY KEY DEFAULT unique_rowid(), - name STRING - ) - ~~~ - -Using the command line, output with the column width set to `40`: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach sqlfmt --print-width 40 -e "CREATE TABLE animals (id INT PRIMARY KEY DEFAULT unique_rowid(), name STRING);" -~~~ - -~~~ sql -CREATE TABLE animals ( - id - INT - PRIMARY KEY - DEFAULT unique_rowid(), - name STRING -) -~~~ - -### Reformat a query with vertical alignment - -Output with the default vertical alignment: - -~~~ shell -$ cockroach sqlfmt -e "SELECT winner, round(length / (60 * 5)) AS counter FROM players WHERE build = $1 AND (hero = $2 OR region = $3);" -~~~ - -~~~ sql -SELECT -winner, round(length / (60 * 5)) AS counter -FROM -players -WHERE -build = $1 AND (hero = $2 OR region = $3) -~~~ - -Output with vertical alignment: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach sqlfmt --align -e "SELECT winner, round(length / (60 * 5)) AS counter FROM players WHERE build = $1 AND (hero = $2 OR region = $3);" -~~~ - -~~~ sql -SELECT winner, round(length / (60 * 5)) AS counter - FROM players - WHERE build = $1 AND (hero = $2 OR region = $3); -~~~ - -### Reformat a query with simplification of parentheses - -Output with the default simplification of parentheses: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach sqlfmt -e "SELECT (1 * 2) + 3, (1 + 2) * 3;" -~~~ - -~~~ sql -SELECT 1 * 2 + 3, (1 + 2) * 3 -~~~ - -Output with no simplification of parentheses: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach sqlfmt --no-simplify -e "SELECT (1 * 2) + 3, (1 + 2) * 3;" -~~~ - -~~~ sql -SELECT (1 * 2) + 3, (1 + 2) * 3 -~~~ - -## See also - -- [Sequel Fumpt](https://sqlfum.pt/) -- [`cockroach demo`](cockroach-demo.html) -- [`cockroach sql`](cockroach-sql.html) -- [`cockroach` Commands Overview](cockroach-commands.html) diff --git a/src/current/v22.1/cockroach-start-single-node.md b/src/current/v22.1/cockroach-start-single-node.md deleted file mode 100644 index fe86eded870..00000000000 --- a/src/current/v22.1/cockroach-start-single-node.md +++ /dev/null @@ -1,419 +0,0 @@ ---- -title: cockroach start-single-node -summary: The cockroach start-single-node command starts a single-node cluster with replication disabled. 
-toc: true -docs_area: reference.cli ---- - -This page explains the `cockroach start-single-node` [command](cockroach-commands.html), which you use to start a single-node cluster with replication disabled. A single-node cluster is all you need for quick SQL testing or app development. - -{{site.data.alerts.callout_success}} -To run a multi-node cluster with replicated data for availability and consistency, use [`cockroach start`](cockroach-start.html) and [`cockroach init`](cockroach-init.html). -{{site.data.alerts.end}} - -## Synopsis - -Start a single-node cluster: - -~~~ shell -$ cockroach start-single-node -~~~ - -View help: - -~~~ shell -$ cockroach start-single-node --help -~~~ - -## Flags - -The `cockroach start-single-node` command supports the following [general-use](#general), [networking](#networking), [security](#security), and [logging](#logging) flags. - -Many flags have useful defaults that can be overridden by specifying the flags explicitly. If you specify flags explicitly, however, be sure to do so each time the node is restarted, as they will not be remembered. - -{{site.data.alerts.callout_info}} -The `cockroach start-single-node` flags are identical to [`cockroach start`](cockroach-start.html#flags) flags. However, many of them are not relevant for single-node clusters but are provided for users who want to test concepts that appear in multi-node clusters. These flags are called out as such. In most cases, accepting most defaults is sufficient (see the [examples](#examples) below). -{{site.data.alerts.end}} - -### General - -Flag | Description ------|----------- -`--attrs` | **Not relevant for single-node clusters.** Arbitrary strings, separated by colons, specifying node capability, which might include specialized hardware or number of cores, for example:

`--attrs=ram:64gb`

These can be used to influence the location of data replicas. See [Configure Replication Zones](configure-replication-zones.html#replication-constraints) for full details. -`--background` | Runs the node in the background. Control is returned to the shell only once the node is ready to accept requests, so this is recommended over appending `&` to the command. This flag is **not** available in Windows environments.

**Note:** `--background` is suitable for writing automated test suites or maintenance procedures that need a temporary server process running in the background. It is not intended to be used to start a long-running server, because it does not fully detach from the controlling terminal. Consider using a service manager or a tool like [daemon(8)](https://www.freebsd.org/cgi/man.cgi?query=daemon&sektion=8) instead. -`--cache` | The total size for caches, shared evenly if there are multiple storage devices. This can be a percentage (notated as a decimal or with `%`) or any bytes-based unit, for example:

`--cache=.25`
`--cache=25%`
`--cache=1000000000 ----> 1000000000 bytes`
`--cache=1GB ----> 1000000000 bytes`
`--cache=1GiB ----> 1073741824 bytes`

Note: If you use the `%` notation, you might need to escape the `%` sign, for instance, while configuring CockroachDB through `systemd` service files. For this reason, it's recommended to use the decimal notation instead.

**Note:** The sum of `--cache`, `--max-sql-memory`, and `--max-tsdb-memory` should not exceed 75% of the memory available to the `cockroach` process.

**Default:** `128MiB`

The default cache size is reasonable for local development clusters. For production deployments, this should be increased to 25% or higher. Increasing the cache size will generally improve the node's read performance. See [Recommended Production Settings](recommended-production-settings.html#cache-and-sql-memory-size) for more details. -`--external-io-dir` | The path of the external IO directory with which the local file access paths are prefixed while performing backup and restore operations using local node directories or NFS drives. If set to `disabled`, backups and restores using local node directories and NFS drives are disabled.

**Default:** `extern` subdirectory of the first configured [`store`](#store).

To set the `--external-io-dir` flag to the locations you want to use without needing to restart nodes, create symlinks to the desired locations from within the `extern` directory. -`--listening-url-file` | The file to which the node's SQL connection URL will be written on successful startup, in addition to being printed to the [standard output](#standard-output).

This is particularly helpful in identifying the node's port when an unused port is assigned automatically (`--port=0`). -`--locality` | **Not relevant for single-node clusters.** Arbitrary key-value pairs that describe the location of the node. Locality might include country, region, datacenter, rack, etc. For more details, see [Locality](cockroach-start.html#locality) below. -`--max-disk-temp-storage` | The maximum on-disk storage capacity available to store temporary data for SQL queries that exceed the memory budget (see `--max-sql-memory`). This ensures that JOINs, sorts, and other memory-intensive SQL operations are able to spill intermediate results to disk. This can be a percentage (notated as a decimal or with `%`) or any bytes-based unit (e.g., `.25`, `25%`, `500GB`, `1TB`, `1TiB`).

Note: If you use the `%` notation, you might need to escape the `%` sign, for instance, while configuring CockroachDB through `systemd` service files. For this reason, it's recommended to use the decimal notation instead. Also, if expressed as a percentage, this value is interpreted relative to the size of the first store. However, the temporary space usage is never counted towards any store usage; therefore, when setting this value, it's important to ensure that the size of this temporary storage plus the size of the first store doesn't exceed the capacity of the storage device.

The temporary files are located in the path specified by the `--temp-dir` flag, or in the subdirectory of the first store (see `--store`) by default.

**Default:** `32GiB` -`--max-sql-memory` | The maximum in-memory storage capacity available to store temporary data for SQL queries, including prepared queries and intermediate data rows during query execution. This can be a percentage (notated as a decimal or with `%`) or any bytes-based unit; for example:

`--max-sql-memory=.25`
`--max-sql-memory=25%`
`--max-sql-memory=1000000000 ----> 1000000000 bytes`
`--max-sql-memory=1GB ----> 1000000000 bytes`
`--max-sql-memory=1GiB ----> 1073741824 bytes`

The temporary files are located in the path specified by the `--temp-dir` flag, or in the subdirectory of the first store (see `--store`) by default.

**Note:** If you use the `%` notation, you might need to escape the `%` sign (for instance, while configuring CockroachDB through `systemd` service files). For this reason, it's recommended to use the decimal notation instead.

**Note:** The sum of `--cache`, `--max-sql-memory`, and `--max-tsdb-memory` should not exceed 75% of the memory available to the `cockroach` process.

**Default:** `25%`

The default SQL memory size is suitable for production deployments but can be raised to increase the number of simultaneous client connections the node allows as well as the node's capacity for in-memory processing of rows when using `ORDER BY`, `GROUP BY`, `DISTINCT`, joins, and window functions. For local development clusters with memory-intensive workloads, reduce this value to, for example, `128MiB` to prevent [out-of-memory errors](cluster-setup-troubleshooting.html#out-of-memory-oom-crash). -`--max-tsdb-memory` | Maximum memory capacity available to store temporary data for use by the time-series database to display metrics in the [DB Console](ui-overview.html). Consider raising this value if your cluster is comprised of a large number of nodes where individual nodes have very limited memory available (e.g., under `8 GiB`). Insufficient memory capacity for the time-series database can constrain the ability of the DB Console to process the time-series queries used to render metrics for the entire cluster. This capacity constraint does not affect SQL query execution. This flag accepts numbers interpreted as bytes, size suffixes (e.g., `1GB` and `1GiB`) or a percentage of physical memory (e.g., `0.01`).

**Note:** The sum of `--cache`, `--max-sql-memory`, and `--max-tsdb-memory` should not exceed 75% of the memory available to the `cockroach` process.

**Default:** `0.01` (i.e., 1%) of physical memory or `64 MiB`, whichever is greater. -`--pid-file` | The file to which the node's process ID will be written on successful startup. When this flag is not set, the process ID is not written to file. -`--store`
`-s` | The file path to a storage device and, optionally, store attributes and maximum size. When using multiple storage devices for a node, this flag must be specified separately for each device, for example:

`--store=/mnt/ssd01 --store=/mnt/ssd02`

For more details, see [Store](#store) below. -`--temp-dir` | The path of the node's temporary store directory. The temporary store directory is used primarily as working memory for distributed computations and importing from CSV data sources. On node start-up, the location for the temporary files is printed to the standard output.

**Default:** Subdirectory of the first [store](#store) - -### Networking - -Flag | Description ------|----------- -`--listen-addr` | The IP address/hostname and port to listen on for connections from clients. For IPv6, use the notation `[...]`, e.g., `[::1]` or `[fe80::f6f2:::]`.

**Default:** Listen on all IP addresses on port `26257` -`--http-addr` | The IP address/hostname and port to listen on for DB Console HTTP requests. For IPv6, use the notation `[...]`, e.g., `[::1]:8080` or `[fe80::f6f2:::]:8080`.

**Default:** Listen on the address part of `--listen-addr` on port `8080` -`--socket-dir` | The directory path on which to listen for [Unix domain socket](https://en.wikipedia.org/wiki/Unix_domain_socket) connections from clients installed on the same Unix-based machine. For an example, see [Connect to a cluster listening for Unix domain socket connections](cockroach-sql.html#connect-to-a-cluster-listening-for-unix-domain-socket-connections). - -### Security - -Flag | Description ------|----------- -`--certs-dir` | The path to the [certificate directory](cockroach-cert.html). The directory must contain valid certificates if running in secure mode.

**Default:** `${HOME}/.cockroach-certs/` -`--insecure` | **Note:** The `--insecure` flag is intended for **non-production testing only**.

Run in insecure mode, skipping all TLS encryption and authentication. If this flag is not set, the `--certs-dir` flag must point to valid certificates.

**Note the following risks:** An insecure cluster is open to any client that can access any node's IP addresses; client connections must also be made insecurely; any user, even `root`, can log in without providing a password; any user, connecting as `root`, can read or write any data in your cluster; there is no network encryption or authentication, and thus no confidentiality.

**Default:** `false` -`--accept-sql-without-tls` | This flag (in [preview](cockroachdb-feature-availability.html)) allows you to connect to the cluster using a SQL user's password without [validating the client's certificate](authentication.html#client-authentication). When connecting using the built-in SQL client, [use the `--insecure` flag with the `cockroach sql` command](cockroach-sql.html#client-connection). -`--cert-principal-map` | A comma-separated list of `cert-principal:db-principal` mappings used to map the certificate principals to IP addresses, DNS names, and SQL users. This allows the use of certificates generated by Certificate Authorities that place restrictions on the contents of the `commonName` field. For usage information, see [Create Security Certificates using Openssl](create-security-certificates-openssl.html#examples). -`--enterprise-encryption` | This optional flag specifies the encryption options for one of the stores on the node. If multiple stores exist, the flag must be specified for each store.

This flag takes a number of options. For a complete list of options, and usage instructions, see [Encryption at Rest](encryption.html).

Note that this is an [Enterprise feature](enterprise-licensing.html). - -### Store - -The `--store` flag supports the following fields. Note that commas are used to separate fields, and so are forbidden in all field values. - -{{site.data.alerts.callout_info}} -In-memory storage is not suitable for production deployments at this time. -{{site.data.alerts.end}} - -Field | Description -------|------------ -`type` | For in-memory storage, set this field to `mem`; otherwise, leave this field out. The `path` field must not be set when `type=mem`. -`path` | The file path to the storage device. When not setting `attr`, `size`, or `ballast-size`, the `path` field label can be left out:

`--store=/mnt/ssd01`

When either of those fields are set, however, the `path` field label must be used:

`--store=path=/mnt/ssd01,size=20GB`

**Default:** `cockroach-data` -`attrs` | Arbitrary strings, separated by colons, specifying disk type or capability. These can be used to influence the location of data replicas. See [Configure Replication Zones](configure-replication-zones.html#replication-constraints) for full details.

In most cases, node-level `--locality` or `--attrs` are preferable to store-level attributes, but this field can be used to match capabilities for storage of individual databases or tables. For example, an OLTP database would probably want to allocate space for its tables only on solid state devices, whereas append-only time series might prefer cheaper spinning drives. Typical attributes include whether the store is flash (`ssd`) or spinning disk (`hdd`), as well as speeds and other specs, for example:

`--store=path=/mnt/hda1,attrs=hdd:7200rpm` - `size` | The maximum size allocated to the node. When this size is reached, CockroachDB attempts to rebalance data to other nodes with available capacity. When no other nodes have available capacity, this limit will be exceeded. Data may also be written to the node faster than the cluster can rebalance it away; as long as capacity is available elsewhere, CockroachDB will gradually rebalance data down to the store limit.

The `size` can be specified either in a bytes-based unit or as a percentage of hard drive space (notated as a decimal or with `%`), for example:

`--store=path=/mnt/ssd01,size=10000000000 ----> 10000000000 bytes`
`--store=path=/mnt/ssd01,size=20GB ----> 20000000000 bytes`
`--store=path=/mnt/ssd01,size=20GiB ----> 21474836480 bytes`
`--store=path=/mnt/ssd01,size=0.02TiB ----> 21474836480 bytes`
`--store=path=/mnt/ssd01,size=20% ----> 20% of available space`
`--store=path=/mnt/ssd01,size=0.2 ----> 20% of available space`
`--store=path=/mnt/ssd01,size=.2 ----> 20% of available space`

**Default:** 100%

For an in-memory store, the `size` field is required and must be set to the true maximum bytes or percentage of available memory, for example:

`--store=type=mem,size=20GB`
`--store=type=mem,size=90%`

Note: If you use the `%` notation, you might need to escape the `%` sign, for instance, while configuring CockroachDB through `systemd` service files. For this reason, it's recommended to use the decimal notation instead. - `ballast-size` | Configure the size of the automatically created emergency ballast file. Accepts the same value formats as the [`size` field](#store-size). For more details, see [Automatic ballast files](cluster-setup-troubleshooting.html#automatic-ballast-files).

To disable automatic ballast file creation, set the value to `0`:

`--store=path=/mnt/ssd01,ballast-size=0` - -### Logging - -By default, `cockroach start-single-node` writes all messages to log files, and prints nothing to `stderr`. This includes events with `INFO` [severity](logging.html#logging-levels-severities) and higher. However, you can [customize the logging behavior](configure-logs.html) of this command by using the `--log` flag: - -{% include {{ page.version.version }}/misc/logging-flags.md %} - -#### Defaults - -See the [default logging configuration](configure-logs.html#default-logging-configuration). - -## Standard output - -When you run `cockroach start-single-node`, some helpful details are printed to the standard output: - -~~~ shell -CockroachDB node starting at {{ now | date: "%Y-%m-%d %H:%M:%S.%6 +0000 UTC" }} -build: CCL {{page.release_info.version}} @ {{ site.data.releases | where_exp: "release", "release.major_version == page.version.version" | where_exp: "release", "release.withdrawn != true" | sort: "release_date" | last | map: "release_date" | date: "%Y/%m/%d 12:34:56" }} {{ site.data.releases | where_exp: "release", "release.major_version == page.version.version" | where_exp: "release", "release.withdrawn != true" | sort: "release_date" | last | map: "go_version" }} -webui: http://localhost:8080 -sql: postgresql://root@localhost:26257?sslmode=disable -sql (JDBC): jdbc:postgresql://localhost:26257/defaultdb?sslmode=disable&user=root -RPC client flags: cockroach --host=localhost:26257 --insecure -logs: /Users//node1/logs -temp dir: /Users//node1/cockroach-temp242232154 -external I/O path: /Users//node1/extern -store[0]: path=/Users//node1 -status: initialized new cluster -clusterID: 8a681a16-9623-4fc1-a537-77e9255daafd -nodeID: 1 -~~~ - -{{site.data.alerts.callout_success}} -These details are also written to the `INFO` log in the `/logs` directory. You can retrieve them with a command like `grep 'node starting' node1/logs/cockroach.log -A 11`. -{{site.data.alerts.end}} - -Field | Description -------|------------ -`build` | The version of CockroachDB you are running. -`webui` | The URL for accessing the DB Console. -`sql` | The connection URL for your client. -`RPC client flags` | The flags to use when connecting to the node via [`cockroach` client commands](cockroach-commands.html). -`logs` | The directory containing debug log data. -`temp dir` | The temporary store directory of the node. -`external I/O path` | The external IO directory with which the local file access paths are prefixed while performing [backup](backup.html) and [restore](restore.html) operations using local node directories or NFS drives. -`attrs` | If node-level attributes were specified in the `--attrs` flag, they are listed in this field. These details are potentially useful for [configuring replication zones](configure-replication-zones.html). -`locality` | If values describing the locality of the node were specified in the `--locality` field, they are listed in this field. These details are potentially useful for [configuring replication zones](configure-replication-zones.html). -`store[n]` | The directory containing store data, where `[n]` is the index of the store, e.g., `store[0]` for the first store, `store[1]` for the second store.

If store-level attributes were specified in the `attrs` field of the [`--store`](#store) flag, they are listed in this field as well. These details are potentially useful for [configuring replication zones](configure-replication-zones.html). -`status` | Whether the node is the first in the cluster (`initialized new cluster`), joined an existing cluster for the first time (`initialized new node, joined pre-existing cluster`), or rejoined an existing cluster (`restarted pre-existing node`). -`clusterID` | The ID of the cluster.

When trying to join a node to an existing cluster, if this ID is different than the ID of the existing cluster, the node has started a new cluster. This may be due to conflicting information in the node's data directory. For additional guidance, see the [troubleshooting](common-errors.html#node-belongs-to-cluster-cluster-id-but-is-attempting-to-connect-to-a-gossip-network-for-cluster-another-cluster-id) docs. -`nodeID` | The ID of the node. - -## Examples - -### Start a single-node cluster - -
- - -
- -
-1. Create two directories for certificates: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ mkdir certs my-safe-directory - ~~~ - - Directory | Description - ----------|------------ - `certs` | You'll generate your CA certificate and all node and client certificates and keys in this directory. - `my-safe-directory` | You'll generate your CA key in this directory and then reference the key when generating node and client certificates. - -2. Create the CA (Certificate Authority) certificate and key pair: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-ca \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - -3. Create the certificate and key pair for the node: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-node \ - localhost \ - $(hostname) \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - -4. Create a client certificate and key pair for the `root` user: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-client \ - root \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - -5. Start the single-node cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start-single-node \ - --certs-dir=certs \ - --listen-addr=localhost:26257 \ - --http-addr=localhost:8080 \ - --background - ~~~ -
- -
-

-{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach start-single-node \ ---insecure \ ---listen-addr=localhost:26257 \ ---http-addr=localhost:8080 \ ---background -~~~ -
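In either mode, you can verify that the node is up before moving on. For example, against the insecure cluster started above:

~~~ shell
$ cockroach node status --insecure --host=localhost:26257
~~~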
- -### Scale to multiple nodes - -Scaling a cluster started with `cockroach start-single-node` involves restarting the first node with the `cockroach start` command instead, and then adding new nodes with that command as well, all using a `--join` flag that forms them into a single multi-node cluster. Since replication is disabled in clusters started with `start-single-node`, you also need to enable replication to get CockroachDB's availability and consistency guarantees. - -
- - -
- -
- -1. Stop the single-node cluster: - - Get the process ID of the node: - - {% include_cached copy-clipboard.html %} - ~~~ shell - ps -ef | grep cockroach | grep -v grep - ~~~ - - ~~~ - 501 19584 1 0 6:13PM ttys001 0:01.27 cockroach start-single-node --certs-dir=certs --listen-addr=localhost:26257 --http-addr=localhost:8080 - ~~~ - - Gracefully shut down the node, specifying its process ID: - - {% include_cached copy-clipboard.html %} - ~~~ shell - kill -TERM 19584 - ~~~ - - ~~~ - initiating graceful shutdown of server - server drained and shutdown completed - ~~~ - -1. Restart the node with the [`cockroach start`](cockroach-start.html) command: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - --certs-dir=certs \ - --listen-addr=localhost:26257 \ - --http-addr=localhost:8080 \ - --join=localhost:26257,localhost:26258,localhost:26259 \ - --background - ~~~ - - The new flag to note is `--join`, which specifies the addresses and ports of the nodes that will initially comprise your cluster. You'll use this exact `--join` flag when starting other nodes as well. - - {% include {{ page.version.version }}/prod-deployment/join-flag-single-region.md %} - -1. Add two more nodes: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - --certs-dir=certs \ - --store=node2 \ - --listen-addr=localhost:26258 \ - --http-addr=localhost:8081 \ - --join=localhost:26257,localhost:26258,localhost:26259 \ - --background - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - --certs-dir=certs \ - --store=node3 \ - --listen-addr=localhost:26259 \ - --http-addr=localhost:8082 \ - --join=localhost:26257,localhost:26258,localhost:26259 \ - --background - ~~~ - - These commands are the same as before but with unique `--store`, `--listen-addr`, and `--http-addr` flags, since this all nodes are running on the same machine. Also, since all nodes use the same hostname (`localhost`), you can use the first node's certificate. Note that this is different than running a production cluster, where you would need to generate a certificate and key for each node, issued to all common names and IP addresses you might use to refer to the node as well as to any load balancer instances. - -1. Open the [built-in SQL shell](cockroach-sql.html): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --certs-dir=certs --host=localhost:26257 - ~~~ - -1. 
Update preconfigured [replication zones](configure-replication-zones.html) to replicate user data 3 times and import internal data 5 times: - - {% include_cached copy-clipboard.html %} - ~~~ sql - ALTER RANGE default CONFIGURE ZONE USING num_replicas = 3; - ALTER DATABASE system CONFIGURE ZONE USING num_replicas = 5; - ALTER RANGE meta CONFIGURE ZONE USING num_replicas = 5; - ALTER RANGE system CONFIGURE ZONE USING num_replicas = 5; - ALTER RANGE liveness CONFIGURE ZONE USING num_replicas = 5; - ALTER TABLE system.public.replication_constraint_stats CONFIGURE ZONE DISCARD; - ALTER TABLE system.public.replication_constraint_stats CONFIGURE ZONE USING gc.ttlseconds = 600, constraints = '[]', lease_preferences = '[]'; - ALTER TABLE system.public.replication_stats CONFIGURE ZONE DISCARD; - ALTER TABLE system.public.replication_stats CONFIGURE ZONE USING gc.ttlseconds = 600, constraints = '[]', lease_preferences = '[]'; - ALTER TABLE system.public.tenant_usage CONFIGURE ZONE DISCARD; - ALTER TABLE system.public.tenant_usage CONFIGURE ZONE USING gc.ttlseconds = 7200, constraints = '[]', lease_preferences = '[]'; - ~~~ - -
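To confirm that the new replication settings are in effect, you can list the zone configurations from the SQL shell afterward; for example:

~~~ sql
SHOW ZONE CONFIGURATIONS;
~~~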
- -
- -1. Stop the single-node cluster: - - Get the process ID of the node: - - {% include_cached copy-clipboard.html %} - ~~~ shell - ps -ef | grep cockroach | grep -v grep - ~~~ - - ~~~ - 501 19584 1 0 6:13PM ttys001 0:01.27 cockroach start-single-node --insecure --listen-addr=localhost:26257 --http-addr=localhost:8080 - ~~~ - - Gracefully shut down the node, specifying its process ID: - - {% include_cached copy-clipboard.html %} - ~~~ shell - kill -TERM 19584 - ~~~ - - ~~~ - initiating graceful shutdown of server - server drained and shutdown completed - ~~~ - -1. Restart the node with the [`cockroach start`](cockroach-start.html) command: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - --insecure \ - --listen-addr=localhost:26257 \ - --http-addr=localhost:8080 \ - --join=localhost:26257,localhost:26258,localhost:26259 \ - --background - ~~~ - - The new flag to note is `--join`, which specifies the addresses and ports of the nodes that will comprise your cluster. You'll use this exact `--join` flag when starting other nodes as well. - -1. Add two more nodes: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - --insecure \ - --store=node2 \ - --listen-addr=localhost:26258 \ - --http-addr=localhost:8081 \ - --join=localhost:26257,localhost:26258,localhost:26259 \ - --background - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - --insecure \ - --store=node3 \ - --listen-addr=localhost:26259 \ - --http-addr=localhost:8082 \ - --join=localhost:26257,localhost:26258,localhost:26259 \ - --background - ~~~ - - These commands are the same as before but with unique `--store`, `--listen-addr`, and `--http-addr` flags, since this all nodes are running on the same machine. - -1. Open the [built-in SQL shell](cockroach-sql.html): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --insecure --host=localhost:26257 - ~~~ - -1. Update preconfigured [replication zones](configure-replication-zones.html) to replicate user data 3 times and import internal data 5 times: - - {% include_cached copy-clipboard.html %} - ~~~ sql - ALTER RANGE default CONFIGURE ZONE USING num_replicas = 3; - ALTER DATABASE system CONFIGURE ZONE USING num_replicas = 5; - ALTER RANGE meta CONFIGURE ZONE USING num_replicas = 5; - ALTER RANGE system CONFIGURE ZONE USING num_replicas = 5; - ALTER RANGE liveness CONFIGURE ZONE USING num_replicas = 5; - ALTER TABLE system.public.replication_constraint_stats CONFIGURE ZONE DISCARD; - ALTER TABLE system.public.replication_constraint_stats CONFIGURE ZONE USING gc.ttlseconds = 600, constraints = '[]', lease_preferences = '[]'; - ALTER TABLE system.public.replication_stats CONFIGURE ZONE DISCARD; - ALTER TABLE system.public.replication_stats CONFIGURE ZONE USING gc.ttlseconds = 600, constraints = '[]', lease_preferences = '[]'; - ~~~ - -
- -## See also - -- Running a local multi-node cluster: - - [From Binary](start-a-local-cluster.html) - - [In Kubernetes](orchestrate-a-local-cluster-with-kubernetes.html) - - [In Docker](start-a-local-cluster-in-docker-mac.html) -- Running a distributed multi-node cluster: - - [From Binary](manual-deployment.html) - - [In Kubernetes](deploy-cockroachdb-with-kubernetes.html) -- [`cockroach` Commands Overview](cockroach-commands.html) diff --git a/src/current/v22.1/cockroach-start.md b/src/current/v22.1/cockroach-start.md deleted file mode 100644 index 3c6232be638..00000000000 --- a/src/current/v22.1/cockroach-start.md +++ /dev/null @@ -1,644 +0,0 @@ ---- -title: cockroach start -summary: Start a new multi-node cluster or add nodes to an existing multi-node cluster. -toc: true -key: start-a-node.html -docs_area: reference.cli ---- - -This page explains the `cockroach start` [command](cockroach-commands.html), which you use to start a new multi-node cluster or add nodes to an existing cluster. - -{{site.data.alerts.callout_success}} -If you need a simple single-node backend for app development, use [`cockroach start-single-node`](cockroach-start-single-node.html) instead, and follow the best practices for local testing described in [Test Your Application](local-testing.html). - -For quick SQL testing, consider using [`cockroach demo`](cockroach-demo.html) to start a temporary, in-memory cluster with immediate access to an interactive SQL shell. -{{site.data.alerts.end}} - -{{site.data.alerts.callout_info}} -Node-level settings are defined by [flags](#flags) passed to the `cockroach start` command and cannot be changed without stopping and restarting the node. In contrast, some cluster-wide settings are defined via SQL statements and can be updated anytime after a cluster has been started. For more details, see [Cluster Settings](cluster-settings.html). -{{site.data.alerts.end}} - -## Synopsis - -Start a node to be part of a new multi-node cluster: - -~~~ shell -$ cockroach start -~~~ - -Initialize a new multi-node cluster: - -~~~ shell -$ cockroach init -~~~ - -Add a node to an existing cluster: - -~~~ shell -$ cockroach start -~~~ - -View help: - -~~~ shell -$ cockroach start --help -~~~ - -## Flags - -The `cockroach start` command supports the following [general-use](#general), [networking](#networking), [security](#security), and [logging](#logging) flags. - -Many flags have useful defaults that can be overridden by specifying the flags explicitly. If you specify flags explicitly, however, be sure to do so each time the node is restarted, as they will not be remembered. The one exception is the `--join` flag, which is stored in a node's data directory. We still recommend specifying the `--join` flag every time, as this will allow nodes to rejoin the cluster even if their data directory was destroyed. - -### General - -Flag | Description ------|----------- -`--attrs` | Arbitrary strings, separated by colons, specifying node capability, which might include specialized hardware or number of cores, for example:

`--attrs=ram:64gb`

These can be used to influence the location of data replicas. See [Configure Replication Zones](configure-replication-zones.html#replication-constraints) for full details. -`--background` | Runs the node in the background. Control is returned to the shell only once the node is ready to accept requests, so this is recommended over appending `&` to the command. This flag is **not** available in Windows environments.

**Note:** `--background` is suitable for writing automated test suites or maintenance procedures that need a temporary server process running in the background. It is not intended to be used to start a long-running server, because it does not fully detach from the controlling terminal. Consider using a service manager or a tool like [daemon(8)](https://www.freebsd.org/cgi/man.cgi?query=daemon&sektion=8) instead. -`--cache` | The total size for caches, shared evenly if there are multiple storage devices. This can be a percentage (notated as a decimal or with `%`) or any bytes-based unit; for example:

`--cache=.25`
`--cache=25%`
`--cache=1000000000 ----> 1000000000 bytes`
`--cache=1GB ----> 1000000000 bytes`
`--cache=1GiB ----> 1073741824 bytes`

**Note:** If you use the `%` notation, you might need to escape the `%` sign (for instance, while configuring CockroachDB through `systemd` service files). For this reason, it's recommended to use the decimal notation instead.

**Note:** The sum of `--cache`, `--max-sql-memory`, and `--max-tsdb-memory` should not exceed 75% of the memory available to the `cockroach` process.

**Default:** `128MiB`

The default cache size is reasonable for local development clusters. For production deployments, this should be increased to 25% or higher. Increasing the cache size will generally improve the node's read performance. For more details, see [Recommended Production Settings](recommended-production-settings.html#cache-and-sql-memory-size). -`--clock-device` | Enable CockroachDB to use a [PTP hardware clock](https://www.kernel.org/doc/html/latest/driver-api/ptp.html) when querying the current time. The value is a string that specifies the clock device to use. For example: `--clock-device=/dev/ptp0`

**Note:** This is supported on Linux only and may be needed in cases where the host clock is unreliable or prone to large jumps (e.g., when using vMotion). -`--cluster-name` | A string that specifies a cluster name. This is used together with `--join` to ensure that all newly created nodes join the intended cluster when you are running multiple clusters.

**Note:** If this is set, [`cockroach init`](cockroach-init.html), [`cockroach node decommission`](cockroach-node.html), [`cockroach node recommission`](cockroach-node.html), and the `cockroach debug` commands must specify either `--cluster-name` or `--disable-cluster-name-verification` in order to work. -`--disable-cluster-name-verification` | On clusters for which a cluster name has been set, this flag paired with `--cluster-name` disables the cluster name check for the command. This is necessary on existing clusters, when setting a cluster name or changing the cluster name: Perform a rolling restart of all nodes and include both the new `--cluster-name` value and `--disable-cluster-name-verification`, then a second rolling restart with `--cluster-name` and without `--disable-cluster-name-verification`. -`--external-io-dir` | The path of the external IO directory with which the local file access paths are prefixed while performing backup and restore operations using local node directories or NFS drives. If set to `disabled`, backups and restores using local node directories and NFS drives, as well as [`cockroach nodelocal upload`](cockroach-nodelocal-upload.html), are disabled.

**Default:** `extern` subdirectory of the first configured [`store`](#store).

To set the `--external-io-dir` flag to the locations you want to use without needing to restart nodes, create symlinks to the desired locations from within the `extern` directory. -`--listening-url-file` | The file to which the node's SQL connection URL will be written as soon as the node is ready to accept connections, in addition to being printed to the [standard output](#standard-output). When `--background` is used, this happens before the process detaches from the terminal.

This is particularly helpful in identifying the node's port when an unused port is assigned automatically (`--port=0`). -`--locality` | Arbitrary key-value pairs that describe the location of the node. Locality might include country, region, availability zone, etc. A `region` tier must be included in order to enable [multi-region capabilities](multiregion-overview.html). For more details, see [Locality](#locality) below. -`--max-disk-temp-storage` | The maximum on-disk storage capacity available to store temporary data for SQL queries that exceed the memory budget (see `--max-sql-memory`). This ensures that JOINs, sorts, and other memory-intensive SQL operations are able to spill intermediate results to disk. This can be a percentage (notated as a decimal or with `%`) or any bytes-based unit (e.g., `.25`, `25%`, `500GB`, `1TB`, `1TiB`).

**Note:** If you use the `%` notation, you might need to escape the `%` sign (for instance, while configuring CockroachDB through `systemd` service files). For this reason, it's recommended to use the decimal notation instead. Also, if expressed as a percentage, this value is interpreted relative to the size of the first store. However, the temporary space usage is never counted towards any store usage; therefore, when setting this value, it's important to ensure that the size of this temporary storage plus the size of the first store doesn't exceed the capacity of the storage device.

The temporary files are located in the path specified by the `--temp-dir` flag, or in the subdirectory of the first store (see `--store`) by default.

**Default:** `32GiB` -`--max-offset` | The maximum allowed clock offset for the cluster. If observed clock offsets exceed this limit, servers will crash to minimize the likelihood of reading inconsistent data. Increasing this value will increase the time to recovery of failures as well as the frequency of uncertainty-based read restarts.

Note that this value must be the same on all nodes in the cluster and cannot be changed with a [rolling upgrade](upgrade-cockroach-version.html). In order to change it, first stop every node in the cluster. Then once the entire cluster is offline, restart each node with the new value.

**Default:** `500ms` -`--max-sql-memory` | The maximum in-memory storage capacity available to store temporary data for SQL queries, including prepared queries and intermediate data rows during query execution. This can be a percentage (notated as a decimal or with `%`) or any bytes-based unit; for example:

`--max-sql-memory=.25`
`--max-sql-memory=25%`
`--max-sql-memory=10000000000 ----> 10000000000 bytes`
`--max-sql-memory=1GB ----> 1000000000 bytes`
`--max-sql-memory=1GiB ----> 1073741824 bytes`

The temporary files are located in the path specified by the `--temp-dir` flag, or in the subdirectory of the first store (see `--store`) by default.

**Note:** If you use the `%` notation, you might need to escape the `%` sign (for instance, while configuring CockroachDB through `systemd` service files). For this reason, it's recommended to use the decimal notation instead.

**Note:** The sum of `--cache`, `--max-sql-memory`, and `--max-tsdb-memory` should not exceed 75% of the memory available to the `cockroach` process.

**Default:** `25%`

The default SQL memory size is suitable for production deployments but can be raised to increase the number of simultaneous client connections the node allows as well as the node's capacity for in-memory processing of rows when using `ORDER BY`, `GROUP BY`, `DISTINCT`, joins, and window functions. For local development clusters with memory-intensive workloads, reduce this value to, for example, `128MiB` to prevent [out-of-memory errors](cluster-setup-troubleshooting.html#out-of-memory-oom-crash). -`--max-tsdb-memory` | Maximum memory capacity available to store temporary data for use by the time-series database to display metrics in the [DB Console](ui-overview.html). Consider raising this value if your cluster is comprised of a large number of nodes where individual nodes have very limited memory available (e.g., under `8 GiB`). Insufficient memory capacity for the time-series database can constrain the ability of the DB Console to process the time-series queries used to render metrics for the entire cluster. This capacity constraint does not affect SQL query execution. This flag accepts numbers interpreted as bytes, size suffixes (e.g., `1GB` and `1GiB`) or a percentage of physical memory (e.g., `0.01`).

**Note:** The sum of `--cache`, `--max-sql-memory`, and `--max-tsdb-memory` should not exceed 75% of the memory available to the `cockroach` process.

**Default:** `0.01` (i.e., 1%) of physical memory or `64 MiB`, whichever is greater. -`--pid-file` | The file to which the node's process ID will be written as soon as the node is ready to accept connections. When `--background` is used, this happens before the process detaches from the terminal. When this flag is not set, the process ID is not written to file. - `--store`
`-s` | The file path to a storage device and, optionally, store attributes and maximum size. When using multiple storage devices for a node, this flag must be specified separately for each device, for example:

`--store=/mnt/ssd01 --store=/mnt/ssd02`

For more details, see [Store](#store) below. -`--spatial-libs` | The location on disk where CockroachDB looks for [spatial](spatial-features.html) libraries.

**Defaults:**
  • `/usr/local/lib/cockroach`
  • A `lib` subdirectory of the CockroachDB binary's current directory.

-`--temp-dir` | The path of the node's temporary store directory. The temporary store directory is used primarily as working memory for distributed computations and importing from CSV data sources. On node start-up, the location for the temporary files is printed to the standard output.

**Default:** Subdirectory of the first [store](#store) - -### Networking - -Flag | Description ------|----------- -`--experimental-dns-srv` | When this flag is included, the node will first attempt to fetch SRV records from DNS for every name specified with `--join`. If a valid SRV record is found, that information is used instead of regular DNS A/AAAA lookups. This feature is experimental and may be removed or modified in a later version. -`--listen-addr` | The IP address/hostname and port to listen on for connections from other nodes and clients. For IPv6, use the notation `[...]`, e.g., `[::1]` or `[fe80::f6f2:::]`.

This flag's effect depends on how it is used in combination with `--advertise-addr`. For example, the node will also advertise itself to other nodes using this value if `--advertise-addr` is not specified. For more details, see [Networking](recommended-production-settings.html#networking).

**Default:** Listen on all IP addresses on port `26257`; if `--advertise-addr` is not specified, also advertise the node's canonical hostname to other nodes -`--advertise-addr` | The IP address/hostname and port to tell other nodes to use. If using a hostname, it must be resolvable from all nodes. If using an IP address, it must be routable from all nodes; for IPv6, use the notation `[...]`, e.g., `[::1]` or `[fe80::f6f2:::]`.

This flag's effect depends on how it is used in combination with `--listen-addr`. For example, if the port number is different than the one used in `--listen-addr`, port forwarding is required. For more details, see [Networking](recommended-production-settings.html#networking).

**Default:** The value of `--listen-addr`; if `--listen-addr` is not specified, advertises the node's canonical hostname and port `26257` -`--http-addr` | The IP address/hostname and port to listen on for DB Console HTTP requests. For IPv6, use the notation `[...]`, e.g., `[::1]:8080` or `[fe80::f6f2:::]:8080`.

**Default:** Listen on the address part of `--listen-addr` on port `8080` -`--locality-advertise-addr` | The IP address/hostname and port to tell other nodes in specific localities to use. This flag is useful when running a cluster across multiple networks, where nodes in a given network have access to a private or local interface while nodes outside the network do not. In this case, you can use `--locality-advertise-addr` to tell nodes within the same network to prefer the private or local address to improve performance and use `--advertise-addr` to tell nodes outside the network to use another address that is reachable from them.

This flag relies on nodes being started with the [`--locality`](#locality) flag and uses the `locality@address` notation, for example:

`--locality-advertise-addr=region=us-west@10.0.0.0:26257`

See the [example](#start-a-multi-node-cluster-across-private-networks) below for more details. -`--sql-addr` | The IP address/hostname and port to listen on for SQL connections from clients. For IPv6, use the notation `[...]`, e.g., `[::1]` or `[fe80::f6f2:::]`.

This flag's effect depends on how it is used in combination with `--advertise-sql-addr`. For example, the node will also advertise itself to clients using this value if `--advertise-sql-addr` is not specified.

**Default:** The value of `--listen-addr`; if `--listen-addr` is not specified, advertises the node's canonical hostname and port `26257`

For an example, see [Start a cluster with separate RPC and SQL networks](#start-a-cluster-with-separate-rpc-and-sql-networks) -`--advertise-sql-addr` | The IP address/hostname and port to tell clients to use. If using a hostname, it must be resolvable from all nodes. If using an IP address, it must be routable from all nodes; for IPv6, use the notation `[...]`, e.g., `[::1]` or `[fe80::f6f2:::]`.

This flag's effect depends on how it is used in combination with `--sql-addr`. For example, if the port number is different than the one used in `--sql-addr`, port forwarding is required.

**Default:** The value of `--sql-addr`; if `--sql-addr` is not specified, advertises the value of `--listen-addr` -`--join`
`-j` | The host addresses that connect nodes to the cluster and distribute the rest of the node addresses. These can be IP addresses or DNS aliases of nodes.

When starting a cluster in a single region, specify the addresses of 3-5 initial nodes. When starting a cluster in multiple regions, specify more than 1 address per region, and select nodes that are spread across failure domains. Then run the [`cockroach init`](cockroach-init.html) command against any of these nodes to complete cluster startup. See the [example](#start-a-multi-node-cluster) below for more details.

Use the same `--join` list for all nodes to ensure that the cluster can stabilize. Do not list every node in the cluster, because this increases the time for a new cluster to stabilize. Note that these are best practices; it is not required to restart an existing node to update its `--join` flag.

`cockroach start` must be run with the `--join` flag. To start a single-node cluster, use `cockroach start-single-node` instead. -`--socket-dir` | The directory path on which to listen for [Unix domain socket](https://en.wikipedia.org/wiki/Unix_domain_socket) connections from clients installed on the same Unix-based machine. For an example, see [Connect to a cluster listening for Unix domain socket connections](cockroach-sql.html#connect-to-a-cluster-listening-for-unix-domain-socket-connections). -`--advertise-host` | **Deprecated.** Use `--advertise-addr` instead. -`--host` | **Deprecated.** Use `--listen-addr` instead. -`--port`
`-p` | **Deprecated.** Specify port in `--advertise-addr` and/or `--listen-addr` instead. -`--http-host` | **Deprecated.** Use `--http-addr` instead. -`--http-port` | **Deprecated.** Specify port in `--http-addr` instead. - -### Security - -Flag | Description ------|----------- -`--certs-dir` | The path to the [certificate directory](cockroach-cert.html). The directory must contain valid certificates if running in secure mode.

**Default:** `${HOME}/.cockroach-certs/` -`--insecure` | **Note:** The `--insecure` flag is intended for **non-production testing only**.

Run in insecure mode, skipping all TLS encryption and authentication. If this flag is not set, the `--certs-dir` flag must point to valid certificates.

**Note the following risks:** An insecure cluster is open to any client that can access any node's IP addresses; client connections must also be made insecurely; any user, even `root`, can log in without providing a password; any user, connecting as `root`, can read or write any data in your cluster; there is no network encryption or authentication, and thus no confidentiality.

**Default:** `false` -`--accept-sql-without-tls` | This flag (in [preview](cockroachdb-feature-availability.html)) allows you to connect to the cluster using a SQL user's password without [validating the client's certificate](authentication.html#client-authentication). When connecting using the built-in SQL client, [use the `--insecure` flag with the `cockroach sql` command](cockroach-sql.html#client-connection). -`--cert-principal-map` | A comma-separated list of `cert-principal:db-principal` mappings used to map the certificate principals to IP addresses, DNS names, and SQL users. This allows the use of certificates generated by Certificate Authorities that place restrictions on the contents of the `commonName` field. For usage information, see [Create Security Certificates using Openssl](create-security-certificates-openssl.html#examples). -`--enterprise-encryption` | This optional flag specifies the encryption options for one of the stores on the node. If multiple stores exist, the flag must be specified for each store.

This flag takes a number of options. For a complete list of options, and usage instructions, see [Encryption at Rest](encryption.html).

Note that this is an [Enterprise feature](enterprise-licensing.html). -`--external-io-disable-http` | This optional flag disables external HTTP(S) access (as well as custom HTTP(S) endpoints) when performing bulk operations (e.g., [`BACKUP`](backup.html), [`IMPORT`](import.html), etc.). This can be used in environments where you cannot run a full proxy server.

If you want to run a proxy server, you can start CockroachDB while specifying the `HTTP(S)_PROXY` environment variable. -`--external-io-disable-implicit-credentials` | This optional flag disables the use of implicit credentials when accessing external cloud storage services for bulk operations (e.g, [`BACKUP`](backup.html), [`IMPORT`](import.html), etc.). - -### Locality - -The `--locality` flag accepts arbitrary key-value pairs that describe the location of the node. Locality should include a `region` key-value if you are using CockroachDB's [Multi-region SQL capabilities](multiregion-overview.html). - -Depending on your deployment you can also specify country, availability zone, etc. The key-value pairs should be ordered into _locality tiers_ from most inclusive to least inclusive (e.g., region before availability zone as in `region=eu-west-1,zone=eu-west-1a`), and the keys and order of key-value pairs must be the same on all nodes. It's typically better to include more pairs than fewer. - -- Specifying a region with a `region` tier is required in order to enable CockroachDB's [multi-region capabilities](multiregion-overview.html). - -- CockroachDB spreads the replicas of each piece of data across as diverse a set of localities as possible, with the order determining the priority. Locality can also be used to influence the location of data replicas in various ways using high-level [multi-region SQL capabilities](multiregion-overview.html) or low-level [replication zones](configure-replication-zones.html#replication-constraints). - -- When there is high latency between nodes (e.g., cross-availability zone deployments), CockroachDB uses locality to move range leases closer to the current workload, reducing network round trips and improving read performance, also known as ["follow-the-workload"](topology-follow-the-workload.html). In a deployment across more than 3 availability zones, however, to ensure that all data benefits from "follow-the-workload", you must increase your replication factor to match the total number of availability zones. - -- Locality is also a prerequisite for using the [Multi-region SQL abstractions](multiregion-overview.html), [table partitioning](partitioning.html), and [**Node Map**](enable-node-map.html) {{site.data.products.enterprise}} features. - - - -#### Example - -The following shell commands use the `--locality` flag to start 9 nodes to run across 3 regions: `us-east-1`, `us-west-1`, and `europe-west-1`. Each region's nodes are further spread across different availability zones within that region. - -{{site.data.alerts.callout_info}} -This example follows the conventions required to use CockroachDB's [multi-region capabilities](multiregion-overview.html). -{{site.data.alerts.end}} - -Nodes in `us-east-1`: - -{% include_cached copy-clipboard.html %} -~~~ shell -cockroach start --locality=region=us-east-1,zone=us-east-1a # ... other required flags go here -~~~ - -{% include_cached copy-clipboard.html %} -~~~ shell -cockroach start --locality=region=us-east-1,zone=us-east-1b # ... other required flags go here -~~~ - -{% include_cached copy-clipboard.html %} -~~~ shell -cockroach start --locality=region=us-east-1,zone=us-east-1c # ... other required flags go here -~~~ - -Nodes in `us-west-1`: - -{% include_cached copy-clipboard.html %} -~~~ shell -cockroach start --locality=region=us-west-1,zone=us-west-1a # ... 
other required flags go here -~~~ - -{% include_cached copy-clipboard.html %} -~~~ shell -cockroach start --locality=region=us-west-1,zone=us-west-1b # ... other required flags go here -~~~ - -{% include_cached copy-clipboard.html %} -~~~ shell -cockroach start --locality=region=us-west-1,zone=us-west-1c # ... other required flags go here -~~~ - -Nodes in `europe-west-1`: - -{% include_cached copy-clipboard.html %} -~~~ shell -cockroach start --locality=region=europe-west-1,zone=europe-west-1a # ... other required flags go here -~~~ - -{% include_cached copy-clipboard.html %} -~~~ shell -cockroach start --locality=region=europe-west-1,zone=europe-west-1b # ... other required flags go here -~~~ - -{% include_cached copy-clipboard.html %} -~~~ shell -cockroach start --locality=region=europe-west-1,zone=europe-west-1c # ... other required flags go here -~~~ - -For another multi-region example, see [Start a multi-region cluster](#start-a-multi-region-cluster). - -For more information about how to use CockroachDB's multi-region capabilities, see the [Multi-Region Capabilities Overview](multiregion-overview.html). - -### Storage - -#### Storage engine - -The `--storage-engine` flag is used to choose the storage engine used by the node. Note that this setting applies to all [stores](#store) on the node, including the [temp store](#temp-dir). - - As of v21.1 and later, CockroachDB always uses the [Pebble storage engine](architecture/storage-layer.html#pebble). As such, `pebble` is the default and only option for the `--storage-engine` flag. - -#### Store - -The `--store` flag supports the following fields. Note that commas are used to separate fields, and so are forbidden in all field values. - -{{site.data.alerts.callout_info}} -In-memory storage is not suitable for production deployments at this time. -{{site.data.alerts.end}} - -{{site.data.alerts.callout_success}} -{% include {{ page.version.version }}/prod-deployment/prod-guidance-store-volume.md %} -{{site.data.alerts.end}} - -Field | Description -------|------------ -`type` | For in-memory storage, set this field to `mem`; otherwise, leave this field out. The `path` field must not be set when `type=mem`. -`path` | The file path to the storage device. When not setting `attr`, `size`, or `ballast-size`, the `path` field label can be left out:

`--store=/mnt/ssd01`

When any of those fields is set, however, the `path` field label must be used:

`--store=path=/mnt/ssd01,size=20GB`

**Default:** `cockroach-data` -`attrs` | Arbitrary strings, separated by colons, specifying disk type or capability. These can be used to influence the location of data replicas. See [Configure Replication Zones](configure-replication-zones.html#replication-constraints) for full details.

In most cases, node-level `--locality` or `--attrs` are preferable to store-level attributes, but this field can be used to match capabilities for storage of individual databases or tables. For example, an OLTP database would probably want to allocate space for its tables only on solid-state devices, whereas append-only time series might prefer cheaper spinning drives. Typical attributes include whether the store is flash (`ssd`) or spinning disk (`hdd`), as well as speeds and other specs, for example:

`--store=path=/mnt/hda1,attrs=hdd:7200rpm` - `size` | The maximum size allocated to the node. When this size is reached, CockroachDB attempts to rebalance data to other nodes with available capacity. When no other nodes have available capacity, this limit will be exceeded. Data may also be written to the node faster than the cluster can rebalance it away; as long as capacity is available elsewhere, CockroachDB will gradually rebalance data down to the store limit.

The `size` can be specified either in a bytes-based unit or as a percentage of hard drive space (notated as a decimal or with `%`), for example:

`--store=path=/mnt/ssd01,size=10000000000 ----> 10000000000 bytes`
`--store=path=/mnt/ssd01,size=20GB ----> 20000000000 bytes`
`--store=path=/mnt/ssd01,size=20GiB ----> 21474836480 bytes`
`--store=path=/mnt/ssd01,size=0.02TiB ----> 21474836480 bytes`
`--store=path=/mnt/ssd01,size=20% ----> 20% of available space`
`--store=path=/mnt/ssd01,size=0.2 ----> 20% of available space`
`--store=path=/mnt/ssd01,size=.2 ----> 20% of available space`

**Default:** 100%

For an in-memory store, the `size` field is required and must be set to the true maximum bytes or percentage of available memory, for example:

`--store=type=mem,size=20GB`
`--store=type=mem,size=90%`

Note: If you use the `%` notation, you might need to escape the `%` sign, for instance, while configuring CockroachDB through `systemd` service files. For this reason, it's recommended to use the decimal notation instead. - `ballast-size` | Configure the size of the automatically created emergency ballast file. Accepts the same value formats as the [`size` field](#store-size). For more details, see [Automatic ballast files](cluster-setup-troubleshooting.html#automatic-ballast-files).

To disable automatic ballast file creation, set the value to `0`:

`--store=path=/mnt/ssd01,ballast-size=0` - -### Logging - -By [default](configure-logs.html#default-logging-configuration), `cockroach start` writes all messages to log files, and prints nothing to `stderr`. This includes events with `INFO` [severity](logging.html#logging-levels-severities) and higher. However, you can [customize the logging behavior](configure-logs.html) of this command by using the `--log` flag: - -{% include {{ page.version.version }}/misc/logging-flags.md %} - -#### Defaults - -See the [default logging configuration](configure-logs.html#default-logging-configuration). - -## Standard output - -When you run `cockroach start`, some helpful details are printed to the standard output: - -~~~ shell -CockroachDB node starting at {{ now | date: "%Y-%m-%d %H:%M:%S.%6 +0000 UTC" }} -build: CCL {{page.release_info.version}} @ {{page.release_info.build_time}} (go1.12.6) -webui: http://localhost:8080 -sql: postgresql://root@localhost:26257?sslmode=disable -sql (JDBC): jdbc:postgresql://localhost:26257/defaultdb?sslmode=disable&user=root -RPC client flags: cockroach --host=localhost:26257 --insecure -logs: /Users//node1/logs -temp dir: /Users//node1/cockroach-temp242232154 -external I/O path: /Users//node1/extern -store[0]: path=/Users//node1 -status: initialized new cluster -clusterID: 8a681a16-9623-4fc1-a537-77e9255daafd -nodeID: 1 -~~~ - -{{site.data.alerts.callout_success}} -These details are also written to the `INFO` log in the `/logs` directory. You can retrieve them with a command like `grep 'node starting' node1/logs/cockroach.log -A 11`. -{{site.data.alerts.end}} - -Field | Description -------|------------ -`build` | The version of CockroachDB you are running. -`webui` | The URL for accessing the DB Console. -`sql` | The connection URL for your client. -`RPC client flags` | The flags to use when connecting to the node via [`cockroach` client commands](cockroach-commands.html). -`logs` | The directory containing debug log data. -`temp dir` | The temporary store directory of the node. -`external I/O path` | The external IO directory with which the local file access paths are prefixed while performing [backup](backup.html) and [restore](restore.html) operations using local node directories or NFS drives. -`attrs` | If node-level attributes were specified in the `--attrs` flag, they are listed in this field. These details are potentially useful for [configuring replication zones](configure-replication-zones.html). -`locality` | If values describing the locality of the node were specified in the `--locality` field, they are listed in this field. These details are potentially useful for [configuring replication zones](configure-replication-zones.html). -`store[n]` | The directory containing store data, where `[n]` is the index of the store, e.g., `store[0]` for the first store, `store[1]` for the second store.

If store-level attributes were specified in the `attrs` field of the [`--store`](#store) flag, they are listed in this field as well. These details are potentially useful for [configuring replication zones](configure-replication-zones.html). -`status` | Whether the node is the first in the cluster (`initialized new cluster`), joined an existing cluster for the first time (`initialized new node, joined pre-existing cluster`), or rejoined an existing cluster (`restarted pre-existing node`). -`clusterID` | The ID of the cluster.

When trying to join a node to an existing cluster, if this ID is different than the ID of the existing cluster, the node has started a new cluster. This may be due to conflicting information in the node's data directory. For additional guidance, see the [troubleshooting](common-errors.html#node-belongs-to-cluster-cluster-id-but-is-attempting-to-connect-to-a-gossip-network-for-cluster-another-cluster-id) docs. -`nodeID` | The ID of the node. - -## Examples - -### Start a multi-node cluster - -
- - -
- -To start a multi-node cluster, run the `cockroach start` command for each node, setting the `--join` flag to the addresses of the initial nodes. - -{% include {{ page.version.version }}/prod-deployment/join-flag-single-region.md %} - -{{site.data.alerts.callout_info}} -{% include {{ page.version.version }}/prod-deployment/join-flag-multi-region.md %} -{{site.data.alerts.end}} - -
- -{{site.data.alerts.callout_success}} -Before starting the cluster, use [`cockroach cert`](cockroach-cert.html) to generate node and client certificates for a secure cluster connection. -{{site.data.alerts.end}} - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach start \ ---certs-dir=certs \ ---advertise-addr= \ ---join=,, \ ---cache=.25 \ ---max-sql-memory=.25 -~~~ - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach start \ ---certs-dir=certs \ ---advertise-addr= \ ---join=,, \ ---cache=.25 \ ---max-sql-memory=.25 -~~~ - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach start \ ---certs-dir=certs \ ---advertise-addr= \ ---join=,, \ ---cache=.25 \ ---max-sql-memory=.25 -~~~ - -
- -
- -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach start \ ---insecure \ ---advertise-addr= \ ---join=,, \ ---cache=.25 \ ---max-sql-memory=.25 -~~~ - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach start \ ---insecure \ ---advertise-addr= \ ---join=,, \ ---cache=.25 \ ---max-sql-memory=.25 -~~~ - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach start \ ---insecure \ ---advertise-addr= \ ---join=,, \ ---cache=.25 \ ---max-sql-memory=.25 -~~~ - -
- -Then run the [`cockroach init`](cockroach-init.html) command against any node to perform a one-time cluster initialization: - -
- -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach init \ ---certs-dir=certs \ ---host=
-~~~ - -
- -
- -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach init \ ---insecure \ ---host=
-~~~ - -
- -### Start a multi-region cluster - -In this example we will start a multi-node [local cluster](start-a-local-cluster.html) with a multi-region setup that uses the same regions (passed to the [`--locality`](#locality) flag) as the [multi-region MovR demo application](demo-low-latency-multi-region-deployment.html). - -First, start a node in the `us-east1` region: - -{% include_cached copy-clipboard.html %} -~~~ shell -cockroach start --locality=region=us-east1,zone=us-east-1a --insecure --store=/tmp/node0 --listen-addr=localhost:26257 --http-port=8888 --join=localhost:26257,localhost:26258,localhost:26259 --background -~~~ - -Next, start a node in the `us-west1` region: - -{% include_cached copy-clipboard.html %} -~~~ shell -cockroach start --locality=region=us-west1,zone=us-west-1a --insecure --store=/tmp/node2 --listen-addr=localhost:26259 --http-port=8890 --join=localhost:26257,localhost:26258,localhost:26259 --background -~~~ - -Next, start a node in the `europe-west1` region: - -{% include_cached copy-clipboard.html %} -~~~ shell -cockroach start --locality=region=europe-west1,zone=europe-west-1a --insecure --store=/tmp/node1 --listen-addr=localhost:26258 --http-port=8889 --join=localhost:26257,localhost:26258,localhost:26259 --background -~~~ - -Next, initialize the cluster: - -{% include_cached copy-clipboard.html %} -~~~ shell -cockroach init --insecure --host=localhost --port=26257 -~~~ - -Next, connect to the cluster using [`cockroach sql`](cockroach-sql.html): - -{% include_cached copy-clipboard.html %} -~~~ shell -cockroach sql --host=localhost --port=26257 --insecure -~~~ - -Finally, issue the [`SHOW REGIONS`](show-regions.html) statement to verify that the list of regions is expected. - -{% include_cached copy-clipboard.html %} -~~~ sql -SHOW REGIONS; -~~~ - -~~~ - region | zones | database_names | primary_region_of ----------------+-------+----------------+-------------------- - europe-west1 | {} | {} | {} - us-east1 | {} | {} | {} - us-west1 | {} | {} | {} -(3 rows) -~~~ - -For more information about running CockroachDB multi-region, see the [Multi-region Capabilities Overview](multiregion-overview.html). - -For a more advanced example showing how to run a simulated workload on a multi-region CockroachDB cluster on your local machine, see [Low Latency Reads and Writes in a Multi-Region Cluster](demo-low-latency-multi-region-deployment.html). - -{{site.data.alerts.callout_info}} -For more information about the `--locality` flag, see [Locality](#locality). -{{site.data.alerts.end}} - -### Start a multi-node cluster across private networks - -**Scenario:** - -- You have a cluster that spans GCE and AWS. -- The nodes on each cloud can reach each other on public addresses, but the private addresses aren't reachable from the other cloud. - -**Approach:** - -1. Start each node on GCE with `--locality` set to describe its location, `--locality-advertise-addr` set to advertise its private address to other nodes in on GCE, `--advertise-addr` set to advertise its public address to nodes on AWS, and `--join` set to the public addresses of 3-5 of the initial nodes: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - --certs-dir=certs \ - --locality=cloud=gce \ - --locality-advertise-addr=cloud=gce@ \ - --advertise-addr= \ - --join=,, \ - --cache=.25 \ - --max-sql-memory=.25 - ~~~ - -2. 
Start each node on AWS with `--locality` set to describe its location, `--locality-advertise-addr` set to advertise its private address to other nodes on AWS, `--advertise-addr` set to advertise its public address to nodes on GCE, and `--join` set to the public addresses of 3-5 of the initial nodes: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - --certs-dir=certs \ - --locality=cloud=aws \ - --locality-advertise-addr=cloud=aws@ \ - --advertise-addr= \ - --join=,, \ - --cache=.25 \ - --max-sql-memory=.25 - ~~~ - -3. Run the [`cockroach init`](cockroach-init.html) command against any node to perform a one-time cluster initialization: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach init \ - --certs-dir=certs \ - --host=
- ~~~ - -### Add a node to a cluster - -
- - -
- -To add a node to an existing cluster, run the `cockroach start` command, setting the `--join` flag to the same addresses you used when [starting the cluster](#start-a-multi-node-cluster): - -
- -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach start \ ---certs-dir=certs \ ---advertise-addr= \ ---join=,, \ ---cache=.25 \ ---max-sql-memory=.25 -~~~ - -
- -
- -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach start \ ---insecure \ ---advertise-addr= \ ---join=,, \ ---cache=.25 \ ---max-sql-memory=.25 -~~~ - -
- -### Create a table with node locality information - -Start a three-node cluster with locality information specified in the `cockroach start` commands: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach start --insecure --port=26257 --http-port=26258 --store=cockroach-data/1 --cache=256MiB --locality=region=eu-west-1,cloud=aws,zone=eu-west-1a -~~~ - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach start --insecure --port=26259 --http-port=26260 --store=cockroach-data/2 --cache=256MiB --join=localhost:26257 --locality=region=eu-west-1,cloud=aws,zone=eu-west-1b -~~~ - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach start --insecure --port=26261 --http-port=26262 --store=cockroach-data/3 --cache=256MiB --join=localhost:26257 --locality=region=eu-west-1,cloud=aws,zone=eu-west-1c -~~~ - -You can use the [`crdb_internal.locality_value`](functions-and-operators.html#system-info-functions) built-in function to return the current node's locality information from inside a SQL shell. The example below uses the output of `crdb_internal.locality_value('zone')` as the `DEFAULT` value to use for the `zone` column of new rows. Other available locality keys for the running three-node cluster include `region` and `cloud`. - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE charges ( - zone STRING NOT NULL DEFAULT crdb_internal.locality_value('zone'), - id INT PRIMARY KEY NOT NULL -); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO charges (id) VALUES (1); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM charges WHERE id = 1; -~~~ - -~~~ - zone | id -+------------+----+ - eu-west-1a | 1 -(1 row) -~~~ - -The `zone ` column has the zone of the node on which the row was created. - -In a separate terminal window, open a SQL shell to a different node on the cluster: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure --port 26259 -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO charges (id) VALUES (2); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM charges WHERE id = 2; -~~~ - -~~~ - zone | id -+------------+----+ - eu-west-1b | 2 -(1 row) -~~~ - -In a separate terminal window, open a SQL shell to the third node: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure --port 26261 -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO charges (id) VALUES (3); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM charges WHERE id = 3; -~~~ - -~~~ - zone | id -+------------+----+ - eu-west-1c | 3 -(1 row) -~~~ - -### Start a cluster with separate RPC and SQL networks - -Separating the network addresses used for intra-cluster RPC traffic and application SQL connections can provide an additional level of protection against security issues as a form of [defense in depth](https://en.wikipedia.org/wiki/Defense_in_depth_(computing)). This separation is accomplished with a combination of the [`--sql-addr` flag](#networking) and firewall rules or other network-level access control (which must be maintained outside of CockroachDB). - -For example, suppose you want to use port `26257` for SQL connections and `26258` for intra-cluster traffic. Set up firewall rules so that the CockroachDB nodes can reach each other on port `26258`, but other machines cannot. 
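The exact firewall commands depend on your platform; as a rough sketch using `ufw` on Linux, and assuming three nodes whose internal addresses are `10.0.0.1`, `10.0.0.2`, and `10.0.0.3` (placeholder values), the rules on each node might look like:

{% include_cached copy-clipboard.html %}
~~~ shell
# Allow intra-cluster RPC traffic on 26258 only from the other CockroachDB nodes.
$ sudo ufw allow from 10.0.0.1 to any port 26258 proto tcp
$ sudo ufw allow from 10.0.0.2 to any port 26258 proto tcp
$ sudo ufw allow from 10.0.0.3 to any port 26258 proto tcp

# Allow SQL client connections on 26257 (tighten the source range as appropriate).
$ sudo ufw allow 26257/tcp
~~~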
Start the CockroachDB processes as follows: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach start --sql-addr=:26257 --listen-addr=:26258 --join=node1:26258,node2:26258,node3:26258 --certs-dir=~/cockroach-certs -~~~ - -Note the use of port `26258` (the value for `listen-addr`, not `sql-addr`) in the `--join` flag. Also, if your environment requires the use of the `--advertise-addr` flag, you should probably also use the `--advertise-sql-addr` flag when using a separate SQL address. - -Clusters using this configuration with client certificate authentication may also wish to use [split client CA certificates]({% link {{ page.version.version }}/create-security-certificates-custom-ca.md %}#split-ca-certificates). - -## See also - -- [Initialize a Cluster](cockroach-init.html) -- [Manual Deployment](manual-deployment.html) -- [Orchestrated Deployment](kubernetes-overview.html) -- [Local Deployment](start-a-local-cluster.html) -- [`cockroach` Commands Overview](cockroach-commands.html) diff --git a/src/current/v22.1/cockroach-statement-diag.md b/src/current/v22.1/cockroach-statement-diag.md deleted file mode 100644 index da73fff02fe..00000000000 --- a/src/current/v22.1/cockroach-statement-diag.md +++ /dev/null @@ -1,146 +0,0 @@ ---- -title: cockroach statement-diag -summary: Use statement-diag to manage and download statement diagnostics bundles. -toc: true -docs_area: reference.cli ---- - -Use the `cockroach statement-diag` [command](cockroach-commands.html) to manage and download statement diagnostics bundles generated from the [DB Console](ui-statements-page.html#diagnostics) or [`EXPLAIN ANALYZE (DEBUG)`](explain-analyze.html#explain-analyze-debug). - -## Required privileges - -Only members of the `admin` role can run `cockroach statement-diag`. By default, the `root` user belongs to the `admin` role. - -## Subcommands - -Subcommand | Usage ------------|------ -`list` | List available statement diagnostics bundles and outstanding activation requests. -`download` | Download a specified diagnostics bundle into a `.zip` file. -`delete` | Delete a statement diagnostics bundle(s). -`cancel` | Cancel an outstanding activation request(s). - -## Synopsis - -List available statement diagnostics bundles and outstanding activation requests: - -~~~ shell -$ cockroach statement-diag list -~~~ - -Download a specified diagnostics bundle into a `.zip` file: - -~~~ shell -$ cockroach statement-diag download [] -~~~ - -Delete a statement diagnostics bundle: - -~~~ shell -$ cockroach statement-diag delete -~~~ - -Delete all statement diagnostics bundles: - -~~~ shell -$ cockroach statement-diag delete --all -~~~ - -Cancel an outstanding activation request: - -~~~ shell -$ cockroach statement-diag cancel -~~~ - -Cancel all outstanding activation requests: - -~~~ shell -$ cockroach statement-diag cancel --all -~~~ - -## Flags - -- The `delete` and `cancel` subcommands support one [general-use](#general) flag. -- All `statement-diag` subcommands support several [client connection](#client-connection) and [logging](#logging) flags. - -### General - -Flag | Description ------|------------ -`--all` | Apply to all bundles or activation requests. - -### Client connection - -{% include {{ page.version.version }}/sql/connection-parameters.md %} - -See [Client Connection Parameters](connection-parameters.html) for more details. 
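For example, to point a subcommand at a secure local node on the default port, you might combine it with the standard connection flags (the certificate directory and address below are placeholders):

{% include_cached copy-clipboard.html %}
~~~ shell
$ cockroach statement-diag list --certs-dir=certs --host=localhost:26257
~~~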
- -### Logging - -{% include {{ page.version.version }}/misc/logging-defaults.md %} - -## Examples - -### Setup - -These examples assume you are running [an insecure cluster](start-a-local-cluster.html) and have requested and/or generated statement diagnostics bundles using the [DB Console](ui-statements-page.html#diagnostics) or [`EXPLAIN ANALYZE (DEBUG)`](explain-analyze.html#explain-analyze-debug). - -### Download a statement diagnostics bundle - -List statement diagnostics bundles and/or activation requests: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach statement-diag list --insecure -~~~ - -~~~ -Statement diagnostics bundles: - ID Collection time Statement - 603820372518502401 2020-11-02 18:29:13 UTC CREATE DATABASE bank - -Outstanding activation requests: - ID Activation time Statement - 603811900498804737 2020-11-02 17:46:08 UTC SELECT * FROM bank.accounts -~~~ - -Download a statement diagnostics bundle to `bundle.zip`: - -~~~ shell -$ cockroach statement-diag download 603820372518502401 bundle.zip --insecure -~~~ - -### Delete all statement diagnostics bundles - -Delete all statement diagnostics bundles: - -~~~ shell -$ cockroach statement-diag delete --all --insecure -~~~ - -### Cancel an activation request - -List statement diagnostics bundles and/or activation requests: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach statement-diag list --insecure -~~~ - -~~~ -Outstanding activation requests: - ID Activation time Statement - 603811900498804737 2020-11-02 17:46:08 UTC SELECT * FROM bank.accounts -~~~ - -### Delete an activation request - -~~~ shell -$ cockroach statement-diag cancel 603811900498804737 --insecure -~~~ - -## See also - -- [DB Console statement diagnostics](ui-statements-page.html#diagnostics) -- [`EXPLAIN ANALYZE (DEBUG)`](explain-analyze.html#explain-analyze-debug) -- [`cockroach` Commands Overview](cockroach-commands.html) diff --git a/src/current/v22.1/cockroach-userfile-delete.md b/src/current/v22.1/cockroach-userfile-delete.md deleted file mode 100644 index 3247957dd09..00000000000 --- a/src/current/v22.1/cockroach-userfile-delete.md +++ /dev/null @@ -1,110 +0,0 @@ ---- -title: cockroach userfile delete -summary: The cockroach userfile delete command deletes files stored in user-scoped file storage. -toc: true -docs_area: reference.cli ---- - - The `cockroach userfile delete` [command](cockroach-commands.html) deletes the files stored in the [user-scoped file storage](use-userfile-for-bulk-operations.html) which match the [provided pattern](cockroach-userfile-upload.html#file-destination), using a SQL connection. If the pattern `'*'` is passed, all files in the specified (or default, if unspecified) user-scoped file storage will be deleted. Deletions are not atomic, and all deletions prior to the first failure will occur. - -## Required privileges - -The user must have the `CREATE` [privilege](security-reference/authorization.html#managing-privileges) on the target database. CockroachDB will proactively grant the user `GRANT`, `SELECT`, `INSERT`, `DROP`, `DELETE` on the metadata and file tables. - -A user can only delete files from their own user-scoped storage, which is accessed through the [userfile URI](cockroach-userfile-upload.html#file-destination) used during the upload. CockroachDB will revoke all access from every other user in the cluster except users in the `admin` role. Users in the `admin` role can delete from any user's storage. 
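If the user is missing this privilege, a member of the `admin` role can grant it. A minimal sketch, assuming a SQL user named `maxroach` and the `defaultdb` database (both placeholders):

{% include_cached copy-clipboard.html %}
~~~ shell
$ cockroach sql --certs-dir=certs \
  --execute="GRANT CREATE ON DATABASE defaultdb TO maxroach;"
~~~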
- -## Synopsis - -Delete a file: - -~~~ shell -$ cockroach userfile delete [flags] -~~~ - -View help: - -~~~ shell -$ cockroach userfile delete --help -~~~ - -## Flags - - Flag | Description ------------------+----------------------------------------------------- -`--cert-principal-map` | A comma-separated list of `:` mappings. This allows mapping the principal in a cert to a DB principal such as `node` or `root` or any SQL user. This is intended for use in situations where the certificate management system places restrictions on the `Subject.CommonName` or `SubjectAlternateName` fields in the certificate (e.g., disallowing a `CommonName` like `node` or `root`). If multiple mappings are provided for the same ``, the last one specified in the list takes precedence. A principal not specified in the map is passed through as-is via the identity function. A cert is allowed to authenticate a DB principal if the DB principal name is contained in the mapped `CommonName` or DNS-type `SubjectAlternateName` fields. -`--certs-dir` | The path to the [certificate directory](cockroach-cert.html) containing the CA and client certificates and client key.

**Env Variable:** `COCKROACH_CERTS_DIR`
**Default:** `${HOME}/.cockroach-certs/` -`--echo-sql` | Reveal the SQL statements sent implicitly by the command-line utility. -`--url` | A [connection URL](connection-parameters.html#connect-using-a-url) to use instead of the other arguments.

**Env Variable:** `COCKROACH_URL`
**Default:** no URL -`--user`
`-u` | The [SQL user](create-user.html) that will own the client session.

**Env Variable:** `COCKROACH_USER`
**Default:** `root` - -## Examples - -### Delete all files in the default storage - -To delete all files in the directory, pass the `'*'` pattern: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach userfile delete '*' --certs-dir=certs -~~~ - -~~~ -deleted userfile://defaultdb.public.userfiles_root/test-data-2.csv -deleted userfile://defaultdb.public.userfiles_root/test-data.csv -deleted userfile://defaultdb.public.userfiles_root/test-upload/test-data.csv -~~~ - -Note that because a fully qualified userfile URI was not specified, files in the default user-scoped storage (`userfile://defaultdb.public.userfiles_$user/`) were deleted. - -### Delete a specific file - -To delete a specific file, include the file destination in the command: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach userfile delete test-data.csv --certs-dir=certs -~~~ - -~~~ -deleted userfile://defaultdb.public.userfiles_root/test-data.csv -~~~ - -### Delete files that match the provided pattern - -To delete all files that match a pattern, use `*`: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach userfile delete '*.csv' --certs-dir=certs -~~~ - -~~~ -deleted userfile://defaultdb.public.userfiles_root/test-data-2.csv -deleted userfile://defaultdb.public.userfiles_root/test-data.csv -~~~ - -### Delete files from a non-default userfile URI - -If you [uploaded a file to a non-default userfile URI](cockroach-userfile-upload.html#upload-a-file-to-a-non-default-userfile-uri) (e.g., `userfile://testdb.public.uploads`): - -{% include_cached copy-clipboard.html %} -~~~ shell -cockroach userfile upload /Users/maxroach/Desktop/test-data.csv userfile://testdb.public.uploads/test-data.csv -~~~ - -Use the same URI to delete it: - -{% include_cached copy-clipboard.html %} -~~~ shell -cockroach userfile delete userfile://testdb.public.uploads -~~~ - -## See also - -- [`cockroach userfile upload`](cockroach-userfile-upload.html) -- [`cockroach userfile list`](cockroach-userfile-list.html) -- [`cockroach userfile get`](cockroach-userfile-get.html) -- [Use `userfile` for Bulk Operations](use-userfile-for-bulk-operations.html) -- [`cockroach` Commands Overview](cockroach-commands.html) -- [`IMPORT`](import.html) -- [`IMPORT INTO`](import-into.html) diff --git a/src/current/v22.1/cockroach-userfile-get.md b/src/current/v22.1/cockroach-userfile-get.md deleted file mode 100644 index 7ba88893c0e..00000000000 --- a/src/current/v22.1/cockroach-userfile-get.md +++ /dev/null @@ -1,88 +0,0 @@ ---- -title: cockroach userfile get -summary: Fetch files stored in the user-scoped file storage. -toc: true -docs_area: reference.cli ---- - - The `cockroach userfile get` [command](cockroach-commands.html) fetches the files stored in the [user-scoped file storage](use-userfile-for-bulk-operations.html) which match the provided pattern, using a SQL connection. If no pattern is provided, all files in the specified (or default, if unspecified) user-scoped file storage will be fetched. - -## Required privileges - -The user must have `CONNECT` [privileges](security-reference/authorization.html#managing-privileges) on the target database. - -A user can only fetch files from their own user-scoped storage, which is accessed through the [userfile URI](cockroach-userfile-upload.html#file-destination) used during the upload. CockroachDB will revoke all access from every other user in the cluster except users in the `admin` role and users explicitly granted access. 
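As with the other `userfile` commands, a member of the `admin` role can grant the required privilege if it is missing; a minimal sketch, assuming a SQL user named `maxroach` and the `defaultdb` database (both placeholders):

{% include_cached copy-clipboard.html %}
~~~ shell
$ cockroach sql --certs-dir=certs \
  --execute="GRANT CONNECT ON DATABASE defaultdb TO maxroach;"
~~~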
- -{{site.data.alerts.callout_info}} -If this is your first interaction with user-scoped file storage, you may see an error indicating that you need `CREATE` privileges on the database. You must first [upload a file](cockroach-userfile-upload.html) or run a [`BACKUP`](backup.html) to `userfile` before attempting to `get` a file. -{{site.data.alerts.end}} - -## Synopsis - -Fetch a file: - -~~~ shell -$ cockroach userfile get [flags] -~~~ - -View help: - -~~~ shell -$ cockroach userfile get --help -~~~ - -## Flags - - Flag | Description ------------------+----------------------------------------------------- -`--cert-principal-map` | A comma-separated list of `:` mappings. This allows mapping the principal in a cert to a DB principal such as `node` or `root` or any SQL user. This is intended for use in situations where the certificate management system places restrictions on the `Subject.CommonName` or `SubjectAlternateName` fields in the certificate (e.g., disallowing a `CommonName` like `node` or `root`). If multiple mappings are provided for the same ``, the last one specified in the list takes precedence. A principal not specified in the map is passed through as-is via the identity function. A cert is allowed to authenticate a DB principal if the DB principal name is contained in the mapped `CommonName` or DNS-type `SubjectAlternateName` fields. -`--certs-dir` | The path to the [certificate directory](cockroach-cert.html) containing the CA and client certificates and client key.

**Env Variable:** `COCKROACH_CERTS_DIR`
**Default:** `${HOME}/.cockroach-certs/` -`--echo-sql` | Reveal the SQL statements sent implicitly by the command-line utility. -`--url` | A [connection URL](connection-parameters.html#connect-using-a-url) to use instead of the other arguments.

**Env Variable:** `COCKROACH_URL`
**Default:** no URL -`--user`
`-u` | The [SQL user](create-user.html) that will own the client session.

**Env Variable:** `COCKROACH_USER`
**Default:** `root` - -## Examples - -### Get a specific file - -To get the file named test-data.csv from the default user-scoped storage location for the current user: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach userfile get test-data.csv --certs-dir=certs -~~~ - -### Get a file saved to an explicit local file name - -To get a file named test-data.csv from a local directory: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach userfile get test-data.csv /Users/maxroach/Desktop/test-data.csv --certs-dir=certs -~~~ - -### Get a file from a non-default userfile URI - -If you [uploaded a file to a non-default userfile URI](cockroach-userfile-upload.html#upload-a-file-to-a-non-default-userfile-uri) (e.g., `userfile://testdb.public.uploads`), use the same URI to fetch it: - -{% include_cached copy-clipboard.html %} -~~~ shell -cockroach userfile get userfile://testdb.public.uploads/test-data.csv --certs-dir=certs -~~~ - -### Get files that match the provided pattern - -To get all files that match a pattern, use *: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach userfile get '*.csv' --certs-dir=certs -~~~ - -## See also - -- [`cockroach userfile upload`](cockroach-userfile-upload.html) -- [`cockroach userfile delete`](cockroach-userfile-delete.html) -- [`cockroach userfile list`](cockroach-userfile-list.html) -- [Use `userfile` for Bulk Operations](use-userfile-for-bulk-operations.html) -- [`cockroach` Commands Overview](cockroach-commands.html) diff --git a/src/current/v22.1/cockroach-userfile-list.md b/src/current/v22.1/cockroach-userfile-list.md deleted file mode 100644 index 34ca97902b4..00000000000 --- a/src/current/v22.1/cockroach-userfile-list.md +++ /dev/null @@ -1,108 +0,0 @@ ---- -title: cockroach userfile list -summary: List the files stored in the user-scoped file storage. -toc: true -docs_area: reference.cli ---- - - The `cockroach userfile list` [command](cockroach-commands.html) lists the files stored in the [user-scoped file storage](use-userfile-for-bulk-operations.html) which match the [provided pattern](cockroach-userfile-upload.html#file-destination), using a SQL connection. If no pattern is provided, all files in the specified (or default, if unspecified) user scoped file storage will be listed. - -## Required privileges - -The user must have the `CREATE` [privilege](security-reference/authorization.html#managing-privileges) on the target database. CockroachDB will proactively grant the user `GRANT`, `SELECT`, `INSERT`, `DROP`, `DELETE` on the metadata and file tables. - -A user can only view files in their own user-scoped storage, which is accessed through the [userfile URI](cockroach-userfile-upload.html#file-destination) used during the upload. CockroachDB will revoke all access from every other user in the cluster except users in the `admin` role. - -## Synopsis - -View files: - -~~~ shell -$ cockroach userfile list [flags] -~~~ - -View help: - -~~~ shell -$ cockroach userfile list --help -~~~ - -## Flags - - Flag | Description ------------------+----------------------------------------------------- -`--cert-principal-map` | A comma-separated list of `:` mappings. This allows mapping the principal in a cert to a DB principal such as `node` or `root` or any SQL user. This is intended for use in situations where the certificate management system places restrictions on the `Subject.CommonName` or `SubjectAlternateName` fields in the certificate (e.g., disallowing a `CommonName` like `node` or `root`). 
If multiple mappings are provided for the same `<cert-principal>`, the last one specified in the list takes precedence. A principal not specified in the map is passed through as-is via the identity function. A cert is allowed to authenticate a DB principal if the DB principal name is contained in the mapped `CommonName` or DNS-type `SubjectAlternateName` fields. -`--certs-dir` | The path to the [certificate directory](cockroach-cert.html) containing the CA and client certificates and client key.<br>

**Env Variable:** `COCKROACH_CERTS_DIR`
**Default:** `${HOME}/.cockroach-certs/` -`--echo-sql` | Reveal the SQL statements sent implicitly by the command-line utility. -`--url` | A [connection URL](connection-parameters.html#connect-using-a-url) to use instead of the other arguments.

**Env Variable:** `COCKROACH_URL`
**Default:** no URL -`--user`
`-u` | The [SQL user](create-user.html) that will own the client session.

**Env Variable:** `COCKROACH_USER`
**Default:** `root` - -## Examples - -### List all files in the default storage - -If the file or directory is not specified, all files in the default user-scoped storage (`userfile://defaultdb.public.userfiles_$user/`) will be listed: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach userfile list --certs-dir=certs -~~~ - -~~~ -userfile://defaultdb.public.userfiles_root/test-data-2.csv -userfile://defaultdb.public.userfiles_root/test-data.csv -userfile://defaultdb.public.userfiles_root/test-upload/test-data.csv -~~~ - -### List a specific file - -To list all files in a specified directory: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach userfile list test-data.csv --certs-dir=certs -~~~ - -~~~ -userfile://defaultdb.public.userfiles_root/test-data.csv -~~~ - -### List files that match the provided pattern - -To list all files that match a pattern, use `*`: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach userfile list '*.csv' --certs-dir=certs -~~~ - -~~~ -userfile://defaultdb.public.userfiles_root/test-data-2.csv -userfile://defaultdb.public.userfiles_root/test-data.csv -~~~ - -### List files from a non-default userfile URI - -If you [uploaded a file to a non-default userfile URI](cockroach-userfile-upload.html#upload-a-file-to-a-non-default-userfile-uri) (e.g., `userfile://testdb.public.uploads`): - -{% include_cached copy-clipboard.html %} -~~~ shell -cockroach userfile upload /Users/maxroach/Desktop/test-data.csv userfile://testdb.public.uploads/test-data.csv -~~~ - -Use the same URI to view it: - -{% include_cached copy-clipboard.html %} -~~~ shell -cockroach userfile list userfile://testdb.public.uploads -~~~ - -## See also - -- [`cockroach userfile upload`](cockroach-userfile-upload.html) -- [`cockroach userfile delete`](cockroach-userfile-delete.html) -- [`cockroach userfile get`](cockroach-userfile-get.html) -- [Use `userfile` for Bulk Operations](use-userfile-for-bulk-operations.html) -- [`cockroach` Commands Overview](cockroach-commands.html) -- [`IMPORT`](import.html) -- [`IMPORT INTO`](import-into.html) diff --git a/src/current/v22.1/cockroach-userfile-upload.md b/src/current/v22.1/cockroach-userfile-upload.md deleted file mode 100644 index 0b3d8980609..00000000000 --- a/src/current/v22.1/cockroach-userfile-upload.md +++ /dev/null @@ -1,176 +0,0 @@ ---- -title: cockroach userfile upload -summary: The cockroach userfile upload command uploads a file to user-scoped file storage. -toc: true -docs_area: reference.cli ---- - - The `cockroach userfile upload` [command](cockroach-commands.html) uploads a file to the [user-scoped file storage](use-userfile-for-bulk-operations.html) using a SQL connection. - -This command takes in a source file to upload and a destination filename. It will then use a SQL connection to upload the file to the [destination](#file-destination). - -{{site.data.alerts.callout_info}} -A userfile uses storage space in the cluster, and is replicated with the rest of the cluster's data. We recommended using `cockroach userfile upload` for quick uploads from your client (about 15MB or smaller). -{{site.data.alerts.end}} - -{{site.data.alerts.callout_success}} -If you would like to upload and import data from a dump file, consider using [`cockroach import`](cockroach-import.html) instead. -{{site.data.alerts.end}} - -## Required privileges - -The user must have the `CREATE` [privilege](security-reference/authorization.html#managing-privileges) on the target database. 
CockroachDB will proactively grant the user `GRANT`, `SELECT`, `INSERT`, `DROP`, `DELETE` on the metadata and file tables. - -A user can only upload files to their own user-scoped storage, which is accessed through the [userfile URI](#file-destination). CockroachDB will revoke all access from every other user in the cluster except users in the `admin` role. - -## Synopsis - -Upload a file: - -~~~ shell -$ cockroach userfile upload [flags] -~~~ - -Upload a directory recursively: - -~~~ shell -cockroach userfile upload --recursive [flags] -~~~ - -{{site.data.alerts.callout_info}} -You must specify a source path. -{{site.data.alerts.end}} - -View help: - -~~~ shell -$ cockroach userfile upload --help -~~~ - -## File destination - -Userfile operations are backed by two tables: `files` (which holds file metadata) and `payload` (which holds the file payloads). To reference these tables, you can: - -- Use the default URI: `userfile://defaultdb.public.userfiles_$user/`. -- Provide a fully qualified userfile URI that specifies the database, schema, and table name prefix you want to use. - - - If you do not specify a destination URI/path, then CockroachDB will use the default URI scheme and host, and the basename from the source argument as the path. For example: `userfile://defaultdb.public.userfiles_root/local` - - If the destination is a well-formed userfile URI (i.e., `userfile://db.schema.tablename_prefix/path/to/file`), then CockroachDB will use that as the final URI. For example: `userfile://foo.bar.baz_root/destination/path` - - If destination is not a well-formed userfile URI, then CockroachDB will use the default userfile URI schema and host (`userfile://defaultdb.public.userfiles_$user/`), and the destination as the path. For example: `userfile://defaultdb.public.userfiles_root/destination/path` - - - -{{site.data.alerts.callout_danger}} -Userfile is **not** a filesystem and does not support filesystem semantics. The destination file path must be the same after normalization (i.e., if you pass any path that results in a different path after normalization, it will be rejected). -{{site.data.alerts.end}} - -{{site.data.alerts.callout_info}} -Files are uploaded with a `.tmp` suffix and are renamed once the userfile upload transaction has committed (i.e, the process ends gracefully). Therefore, if a file you believed had finished uploading has a `.tmp` suffix, then the upload should be retried. -{{site.data.alerts.end}} - -## Flags - - Flag | Description ------------------+----------------------------------------------------- -`--cert-principal-map` | A comma-separated list of `:` mappings. This allows mapping the principal in a cert to a DB principal such as `node` or `root` or any SQL user. This is intended for use in situations where the certificate management system places restrictions on the `Subject.CommonName` or `SubjectAlternateName` fields in the certificate (e.g., disallowing a `CommonName` like `node` or `root`). If multiple mappings are provided for the same ``, the last one specified in the list takes precedence. A principal not specified in the map is passed through as-is via the identity function. A cert is allowed to authenticate a DB principal if the DB principal name is contained in the mapped `CommonName` or DNS-type `SubjectAlternateName` fields. -`--certs-dir` | The path to the [certificate directory](cockroach-cert.html) containing the CA and client certificates and client key.

**Env Variable:** `COCKROACH_CERTS_DIR`
**Default:** `${HOME}/.cockroach-certs/` -`--echo-sql` | Reveal the SQL statements sent implicitly by the command-line utility. -`--url` | A [connection URL](connection-parameters.html#connect-using-a-url) to use instead of the other arguments.

**Env Variable:** `COCKROACH_URL`
**Default:** no URL -`--user`
`-u` | The [SQL user](create-user.html) that will own the client session.

**Env Variable:** `COCKROACH_USER`
**Default:** `root` -`--recursive`
`-r` | Recursively upload a directory and its contents to user-scoped file storage. For example: `cockroach userfile upload -r <source directory> <destination directory>`<br>

See [File Destination](#file-destination) for detail on forming the destination URI and this [usage example](#upload-a-directory-recursively) for working with the `--recursive` flag. - -## Examples - -### Upload a file - -To upload a file to the default storage (`userfile://defaultdb.public.userfiles_$user/`): - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach userfile upload /Users/maxroach/Desktop/test-data.csv /test-data.csv --certs-dir=certs -~~~ - -~~~ -successfully uploaded to userfile://defaultdb.public.userfiles_root/test-data.csv -~~~ - -Also, a file can be uploaded to the default storage if the destination is not specified: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach userfile upload /Users/maxroach/Desktop/test-data2.csv --certs-dir=certs -~~~ - -~~~ -successfully uploaded to userfile://defaultdb.public.userfiles_root/test-data2.csv -~~~ - -Then, you can use the file to [`IMPORT`](import.html) or [`IMPORT INTO`](import-into.html) data. - -### Upload a file to a specific directory - -To upload a file to a specific destination, include the destination in the command: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach userfile upload /Users/maxroach/Desktop/test-data.csv /test-upload/test-data.csv --cert-dir=certs -~~~ - -~~~ -successfully uploaded to userfile://defaultdb.public.userfiles_root/test-upload/test-data.csv -~~~ - -Then, you can use the file to [`IMPORT`](import.html) or [`IMPORT INTO`](import-into.html) data. - -### Upload a directory recursively - - To upload the contents of a directory to userfile storage, specify a source directory and destination. For example, to upload a [backup](backup.html) directory to userfile storage: - -~~~ shell -cockroach userfile upload -r /Users/maxroach/movr-backup userfile:///backup-data --certs-dir=certs -~~~ - -~~~ -uploading: BACKUP-CHECKPOINT-698053706999726081-CHECKSUM -successfully uploaded to userfile://defaultdb.public.userfiles_root/backup-data/movr-backup/BACKUP-CHECKPOINT-698053706999726081-CHECKSUM -uploading: BACKUP-CHECKPOINT-CHECKSUM -successfully uploaded to userfile://defaultdb.public.userfiles_root/backup-data/movr-backup/BACKUP-CHECKPOINT-CHECKSUM -uploading: BACKUP-STATISTICS -successfully uploaded to userfile://defaultdb.public.userfiles_root/backup-data/movr-backup/BACKUP-STATISTICS -uploading: BACKUP_MANIFEST -successfully uploaded to userfile://defaultdb.public.userfiles_root/backup-data/movr-backup/BACKUP_MANIFEST -uploading: BACKUP_MANIFEST-CHECKSUM -successfully uploaded to userfile://defaultdb.public.userfiles_root/backup-data/movr-backup/BACKUP_MANIFEST-CHECKSUM -uploading: data/698053715875692545.sst -successfully uploaded to userfile://defaultdb.public.userfiles_root/backup-data/movr-backup/data/698053715875692545.sst -uploading: data/698053717178744833.sst -successfully uploaded to userfile://defaultdb.public.userfiles_root/backup-data/movr-backup/data/698053717178744833.sst -. . . -~~~ - -When the source directory does not have a trailing slash, the last element of the source path will be appended to the destination path. In this example the source path `/Users/maxroach/movr-backup` does not have a trailing slash, as a result `movr-backup` appends to the destination path—originally `userfile:///backup-data`—to become `userfile://defaultdb.public.userfiles_root/backup-data/movr-backup/`. - -It is important to note that userfile is **not** a filesystem and does not support filesystem semantics. 
The destination file path must be the same after normalization (i.e., if you pass any path that results in a different path after normalization, it will be rejected). - -See the [file destination](#file-destination) section for more detail on forming userfile URIs. - -### Upload a file to a non-default userfile URI - -{% include_cached copy-clipboard.html %} -~~~ shell -cockroach userfile upload /Users/maxroach/Desktop/test-data.csv userfile://testdb.public.uploads/test-data.csv -~~~ - -~~~ -successfully uploaded to userfile://testdb.public.uploads/test-data.csv -~~~ - -## See also - -- [`cockroach userfile list`](cockroach-userfile-list.html) -- [`cockroach userfile delete`](cockroach-userfile-delete.html) -- [`cockroach userfile get`](cockroach-userfile-get.html) -- [Use `userfile` for Bulk Operations](use-userfile-for-bulk-operations.html) -- [`cockroach` Commands Overview](cockroach-commands.html) -- [`IMPORT`](import.html) -- [`IMPORT INTO`](import-into.html) diff --git a/src/current/v22.1/cockroach-version.md b/src/current/v22.1/cockroach-version.md deleted file mode 100644 index 29da0b67501..00000000000 --- a/src/current/v22.1/cockroach-version.md +++ /dev/null @@ -1,45 +0,0 @@ ---- -title: cockroach version -summary: To view version details for a specific cockroach binary, run the cockroach version command. -toc: true -key: view-version-details.html -docs_area: reference.cli ---- - -To view version details for a specific `cockroach` binary, run the `cockroach version` [command](cockroach-commands.html), or run `cockroach --version`: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach version -~~~ - -~~~ -Build Tag: {{page.release_info.version}} -Build Time: {{page.release_info.build_time}} -Distribution: CCL -Platform: darwin amd64 (x86_64-apple-darwin19) -Go Version: go1.15.11 -C Compiler: Clang 10.0.0 -Build Commit ID: ac916850f403f083ea62e2b0dfdfecbbeaaa4d05 -Build Type: release -~~~ - -## Response - -The `cockroach version` command outputs the following fields: - -Field | Description -------|------------ -`Build Tag` | The CockroachDB version.

To return just the build tag, use `cockroach version --build-tag`. -`Build Time` | The date and time when the binary was built. -`Distribution` | The scope of the binary. If `CCL`, the binary contains functionality covered by both the CockroachDB Community License (CCL) and the Business Source License (BSL). If `OSS`, the binary contains only functionality covered by the Apache 2.0 license. The v19.2 release converts to Apache 2.0 as of Oct 1, 2022, at which time you can use the `make buildoss` command to build a pure open-source binary. For more details about licensing, see the [Licensing FAQs](licensing-faqs.html). -`Platform` | The platform that the binary can run on. -`Go Version` | The version of Go in which the source code is written. -`C Compiler` | The C compiler used to build the binary. -`Build Commit ID` | The SHA-1 hash of the commit used to build the binary. -`Build Type` | The type of release. If `release`, `release-gnu`, or `release-musl`, the binary is for a production [release](../releases/). If `development`, the binary is for a testing release. - -## See also - -- [Install CockroachDB](install-cockroachdb.html) -- [`cockroach` Commands Overview](cockroach-commands.html) diff --git a/src/current/v22.1/cockroach-workload.md b/src/current/v22.1/cockroach-workload.md deleted file mode 100644 index 104153deab6..00000000000 --- a/src/current/v22.1/cockroach-workload.md +++ /dev/null @@ -1,665 +0,0 @@ ---- -title: cockroach workload -summary: Use cockroach workload to run a load generator against a CockroachDB cluster. -toc: true -docs_area: reference.cli ---- - -CockroachDB comes with built-in load generators for simulating different types of client workloads, printing per-operation statistics and totals after a specific duration or max number of operations. To run one of these load generators, use the `cockroach workload` [command](cockroach-commands.html) as described below. - -{{site.data.alerts.callout_danger}} -The `cockroach workload` command is experimental. The interface and output are subject to change. -{{site.data.alerts.end}} - -## Synopsis - -Create the schema for a workload: - -~~~ shell -$ cockroach workload init '' -~~~ - -Run a workload: - -~~~ shell -$ cockroach workload run '' -~~~ - -View help: - -~~~ shell -$ cockroach workload --help -~~~ -~~~ shell -$ cockroach workload init --help -~~~ -~~~ shell -$ cockroach workload init --help -~~~ -~~~ shell -$ cockroach workload run --help -~~~ -~~~ shell -$ cockroach workload run --help -~~~ - -## Subcommands - -Command | Usage ---------|------ -`init` | Load the schema for the workload. You run this command once for a given schema. -`run` | Run a workload. You can run this command multiple times from different machines to increase concurrency. See [Concurrency](#concurrency) for more details. - -## Concurrency - -There are two ways to increase the concurrency of a workload: - -- **Increase the concurrency of a single workload instance** by running `cockroach workload run ` with the `--concurrency` flag set to a value higher than the default. Note that not all workloads support this flag. -- **Run multiple instances of a workload in parallel** by running `cockroach workload run ` multiple times from different terminals/machines. - -## Workloads - -Workload | Description ----------|------------ -[`bank`](#bank-workload) | Models a set of accounts with currency balances.

For this workload, you run `workload init` to load the schema and then `workload run` to generate data. -[`intro`](#intro-and-startrek-workloads) | Loads an `intro` database, with one table, `mytable`, with a hidden message.

For this workload, you run only `workload init` to load the data. The `workload run` subcommand is not applicable. -[`kv`](#kv-workload) | Reads and writes to keys spread (by default, uniformly at random) across the cluster.

For this workload, you run `workload init` to load the schema and then `workload run` to generate data. -[`movr`](#movr-workload) | Simulates a workload for the [MovR example application](movr.html).

For this workload, you run `workload init` to load the schema and then `workload run` to generate data. -[`startrek`](#intro-and-startrek-workloads) | Loads a `startrek` database, with two tables, `episodes` and `quotes`.

For this workload, you run only `workload init` to load the data. The `workload run` subcommand is not applicable. -[`tpcc`](#tpcc-workload) | Simulates a transaction processing workload using a rich schema of multiple tables.

For this workload, you run `workload init` to load the schema and then `workload run` to generate data. -[`ycsb`](#ycsb-workload) | Simulates a high-scale key-value workload, either read-heavy, write-heavy, or scan-based, with additional customizations.<br>

For this workload, you run `workload init` to load the schema and then `workload run` to generate data. - -{{site.data.alerts.callout_info}} - `cockroach workload` sets the [`application_name`](set-vars.html#supported-variables) for its workload queries to the name of the workload that is used. You can filter queries on `application_name` on the [Statements page of the DB Console](ui-statements-page.html#search-and-filter), or in a [`SHOW STATEMENTS`](show-statements.html#filter-for-specific-queries) statement. -{{site.data.alerts.end}} - -## Flags - -{{site.data.alerts.callout_info}} -The `cockroach workload` command does not support connection or security flags like other [`cockroach` commands](cockroach-commands.html). Instead, you must use a [connection string](connection-parameters.html) at the end of the command. -{{site.data.alerts.end}} - -### `bank` workload - -Flag | Description ------|------------ -`--concurrency` | The number of concurrent workers.

**Applicable commands:** `init` or `run`
**Default:** 2 * number of CPUs -`--db` | The SQL database to use.

**Applicable commands:** `init` or `run`
**Default:** `bank` -`--display-every` | The frequency for printing per-operation statistics. Valid [time units](https://en.wikipedia.org/wiki/Orders_of_magnitude_(time)) are `ns`, `us`, `ms`, `s`, `m`, and `h`.

**Applicable command:** `run`
**Default:** `1s` -`--display-format` | The format for printing per-operation statistics (`simple`, `incremental-json`). When using `incremental-json`, note that totals are not printed at the end of the workload's duration.

**Applicable command:** `run`
**Default:** `simple` -`--drop` | Drop the existing database, if it exists.

**Applicable commands:** `init` or `run`. For the `run` command, this flag must be used in conjunction with `--init`. -`--duration` | The duration to run, with a required time unit suffix. Valid [time units](https://en.wikipedia.org/wiki/Orders_of_magnitude_(time)) are `ns`, `us`, `ms`, `s`, `m`, and `h`.

**Applicable commands:** `init` or `run`
**Default:** `0`, which means run forever. -`--histograms` | The file to write per-op incremental and cumulative histogram data to.

**Applicable command:** `run` -`--init` | **Deprecated.** Use the `init` command instead.

**Applicable command:** `run` -`--max-ops` | The maximum number of operations to run.

**Applicable command:** `run` -`--max-rate` | The maximum frequency of operations (reads/writes).

**Applicable command:** `run`
**Default:** `0`, which means unlimited. -`--payload-bytes` | The size of the payload field in each initial row.

**Applicable commands:** `init` or `run`
**Default:** `100` -`--ramp` | The duration over which to ramp up load.

**Applicable command:** `run` -`--ranges` | The initial number of ranges in the `bank` table.

**Applicable commands:** `init` or `run`
**Default:** `10` -`--rows` | The initial number of accounts in the `bank` table.

**Applicable commands:** `init` or `run`
**Default:** `1000` -`--seed` | The key hash seed.

**Applicable commands:** `init` or `run`
**Default:** `1` -`--tolerate-errors` | Keep running on error.

**Applicable command:** `run` - -### `intro` and `startrek` workloads - -{{site.data.alerts.callout_info}} -These workloads generate data but do not offer the ability to run continuous load. Thus, only the `init` subcommand is supported. -{{site.data.alerts.end}} - -Flag | Description ------|------------ -`--drop` | Drop the existing database, if it exists, before loading the dataset. - - -### `kv` workload - -Flag | Description ------|------------ -`--batch` | The number of blocks to insert in a single SQL statement.

**Applicable commands:** `init` or `run`
**Default:** `1` -`--concurrency` | The number of concurrent workers.

**Applicable commands:** `init` or `run`
**Default:** `8` -`--cycle-length` | The number of keys repeatedly accessed by each writer.<br>
<br>
**Applicable commands:** `init` or `run`<br>
**Default:** `9223372036854775807` -`--db` | The SQL database to use.

**Applicable commands:** `init` or `run`
**Default:** `kv` -`--display-every` | The frequency for printing per-operation statistics. Valid [time units](https://en.wikipedia.org/wiki/Orders_of_magnitude_(time)) are `ns`, `us`, `ms`, `s`, `m`, and `h`.

**Applicable command:** `run`
**Default:** `1s` -`--display-format` | The format for printing per-operation statistics (`simple`, `incremental-json`). When using `incremental-json`, note that totals are not printed at the end of the workload's duration.

**Applicable command:** `run`
**Default:** `simple` -`--drop` | Drop the existing database, if it exists.

**Applicable commands:** `init` or `run` -`--duration` | The duration to run, with a required time unit suffix. Valid [time units](https://en.wikipedia.org/wiki/Orders_of_magnitude_(time)) are `ns`, `us`, `ms`, `s`, `m`, and `h`.

**Applicable command:** `run`
**Default:** `0`, which means run forever. -`--histograms` | The file to write per-op incremental and cumulative histogram data to.

**Applicable command:** `run` -`--init` | **Deprecated.** Use the `init` command instead.

**Applicable command:** `run` -`--max-block-bytes` | The maximum amount of raw data written with each insertion.

**Applicable commands:** `init` or `run`
**Default:** `2` -`--max-ops` | The maximum number of operations to run.

**Applicable command:** `run` -`--max-rate` | The maximum frequency of operations (reads/writes).

**Applicable command:** `run`
**Default:** `0`, which means unlimited. -`--min-block-bytes` | The minimum amount of raw data written with each insertion.

**Applicable commands:** `init` or `run`
**Default:** `1` -`--ramp` | The duration over which to ramp up load.

**Applicable command:** `run` -`--read-percent` | The percent (0-100) of operations that are reads of existing keys.

**Applicable commands:** `init` or `run` -`--seed` | The key hash seed.

**Applicable commands:** `init` or `run`
**Default:** `1` -`--sequential` | Pick keys sequentially instead of randomly.

**Applicable commands:** `init` or `run` -`--splits` | The number of splits to perform before starting normal operations.

**Applicable commands:** `init` or `run` -`--tolerate-errors` | Keep running on error.

**Applicable command:** `run` -`--use-opt` | Use [cost-based optimizer](cost-based-optimizer.html).

**Applicable commands:** `init` or `run`
**Default:** `true` -`--write-seq` | Initial write sequence value.

**Applicable commands:** `init` or `run` - -### `movr` workload - -Flag | Description ------|------------ -`--data-loader` | How to load initial table data. Valid options are `INSERT` and `IMPORT`.

**Applicable commands:** `init` or `run`
**Default:** `INSERT` -`--db` | The SQL database to use.

**Applicable commands:** `init` or `run`
**Default:** `movr` -`--display-every` | The frequency for printing per-operation statistics. Valid [time units](https://en.wikipedia.org/wiki/Orders_of_magnitude_(time)) are `ns`, `us`, `ms`, `s`, `m`, and `h`.

**Applicable command:** `run`
**Default:** `1s` -`--display-format` | The format for printing per-operation statistics (`simple`, `incremental-json`). When using `incremental-json`, note that totals are not printed at the end of the workload's duration.

**Applicable command:** `run`
**Default:** `simple` -`--drop` | Drop the existing database, if it exists.

**Applicable commands:** `init` or `run` -`--duration` | The duration to run, with a required time unit suffix. Valid [time units](https://en.wikipedia.org/wiki/Orders_of_magnitude_(time)) are `ns`, `us`, `ms`, `s`, `m`, and `h`.

**Applicable command:** `run`
**Default:** `0`, which means run forever. -`--histograms` | The file to write per-op incremental and cumulative histogram data to.

**Applicable command:** `run` -`--histograms-max-latency` | Expected maximum latency of running a query, with a required time unit suffix. Valid [time units](https://en.wikipedia.org/wiki/Orders_of_magnitude_(time)) are `ns`, `us`, `ms`, `s`, `m`, and `h`.

**Applicable command:** `run`
**Default:** `1m40s` -`--max-ops` | The maximum number of operations to run.

**Applicable command:** `run` -`--max-rate` | The maximum frequency of operations (reads/writes).

**Applicable command:** `run`
**Default:** `0`, which means unlimited. -`--method` | The SQL issue method (`prepare`, `noprepare`, `simple`).

**Applicable commands:** `init` or `run`
**Default:** `prepare` -`--num-histories` | The initial number of ride location histories.

**Applicable commands:** `init` or `run`
**Default:** `1000` -`--num-promo-codes` | The initial number of promo codes.

**Applicable commands:** `init` or `run`
**Default:** `1000` -`--num-rides` | Initial number of rides.

**Applicable commands:** `init` or `run`
**Default:** `500` -`--num-users` | Initial number of users.

**Applicable commands:** `init` or `run`
**Default:** `50` -`--num-vehicles` | Initial number of vehicles.

**Applicable commands:** `init` or `run`
**Default:** `15` -`--ramp` | The duration over which to ramp up load.

**Applicable command:** `run` -`--seed` | The random number generator seed.

**Applicable commands:** `init` or `run`
**Default:** `1` -`--tolerate-errors` | Keep running on error.

**Applicable command:** `run` - -### `tpcc` workload - -Flag | Description ------|------------ -`--active-warehouses` | Run the load generator against a specific number of warehouses.

**Applicable commands:** `init` or `run`
**Default:** Value of `--warehouses` -`--concurrency` | The number of concurrent workers.<br>

**Applicable commands:** `init` or `run`
**Default:** `16` -`--data-loader` | How to load initial table data. Valid options are `INSERT` and `IMPORT`.

**Applicable commands:** `init` or `run`
**Default:** `INSERT` -`--db` | The SQL database to use.

**Applicable commands:** `init` or `run`
**Default:** `tpcc` -`--display-every` | The frequency for printing per-operation statistics. Valid [time units](https://en.wikipedia.org/wiki/Orders_of_magnitude_(time)) are `ns`, `us`, `ms`, `s`, `m`, and `h`.

**Applicable command:** `run`
**Default:** `1s` -`--display-format` | The format for printing per-operation statistics (`simple`, `incremental-json`). When using `incremental-json`, note that totals are not printed at the end of the workload's duration.

**Applicable command:** `run`
**Default:** `simple` -`--drop` | Drop the existing database, if it exists.

**Applicable commands:** `init` or `run`. For the `run` command, this flag must be used in conjunction with `--init`. -`--duration` | The duration to run, with a required time unit suffix. Valid [time units](https://en.wikipedia.org/wiki/Orders_of_magnitude_(time)) are `ns`, `us`, `ms`, `s`, `m`, and `h`.

**Applicable command:** `run`
**Default:** `0`, which means run forever. -`--fks` | Add foreign keys.

**Applicable commands:** `init` or `run`
**Default:** `true` -`--histograms` | The file to write per-op incremental and cumulative histogram data to.

**Applicable command:** `run` -`--idle-conns` | Tests the TPCC workload with idle connections. -`--init` | **Deprecated.** Use the `init` command instead.

**Applicable command:** `run` -`--max-ops` | The maximum number of operations to run.

**Applicable command:** `run` -`--max-rate` | The maximum frequency of operations (reads/writes).

**Applicable command:** `run`
**Default:** `0`, which means unlimited. -`--mix` | Weights for the transaction mix.

**Applicable commands:** `init` or `run`
**Default:** `newOrder=10,payment=10,orderStatus=1,delivery=1,stockLevel=1`, which matches the [TPC-C specification](http://www.tpc.org/tpc_documents_current_versions/pdf/tpc-c_v5.11.0.pdf). -`--partition-affinity` | Run the load generator against a specific partition. This flag must be used in conjunction with `--partitions`.

**Applicable commands:** `init` or `run`
**Default:** `-1` -`--partitions` | Partition tables. This flag must be used in conjunction with `--split`.

**Applicable commands:** `init` or `run` -`--ramp` | The duration over which to ramp up load.

**Applicable command:** `run` -`--scatter` | Scatter ranges.

**Applicable commands:** `init` or `run` -`--seed` | The random number generator seed.

**Applicable commands:** `init` or `run`
**Default:** `1` -`--serializable` | Force serializable mode. CockroachDB only supports `SERIALIZABLE` isolation, so this flag is not necessary.

**Applicable command:** `init` -`--split` | [Split tables](split-at.html).

**Applicable commands:** `init` or `run` -`--tolerate-errors` | Keep running on error.

**Applicable command:** `run` -`--wait` | Run in wait mode, i.e., include think/keying sleeps.

**Applicable commands:** `init` or `run`
**Default:** `true` -`--warehouses` | The number of warehouses for loading initial data, at approximately 200 MB per warehouse.

**Applicable commands:** `init` or `run`
**Default:** `1` -`--workers` | The number of concurrent workers.

**Applicable commands:** `init` or `run`
**Default:** `--warehouses` * 10 -`--zones` | The number of [replication zones](configure-replication-zones.html) for partitioning. This number should match the number of `--partitions` and the zones used to start the cluster.

**Applicable command:** `init` - -### `ycsb` workload - -Flag | Description ------|------------ -`--concurrency` | The number of concurrent workers.

**Applicable commands:** `init` or `run`
**Default:** `8` -`--data-loader` | How to load initial table data. Valid options are `INSERT` and `IMPORT`.

**Applicable commands:** `init` or `run`
**Default:** `INSERT` -`--db` | The SQL database to use.

**Applicable commands:** `init` or `run`
**Default:** `ycsb` -`--display-every` | The frequency for printing per-operation statistics. Valid [time units](https://en.wikipedia.org/wiki/Orders_of_magnitude_(time)) are `ns`, `us`, `ms`, `s`, `m`, and `h`.

**Applicable command:** `run`
**Default:** `1s` -`--display-format` | The format for printing per-operation statistics (`simple`, `incremental-json`). When using `incremental-json`, note that totals are not printed at the end of the workload's duration.

**Applicable command:** `run`
**Default:** `simple` -`--drop` | Drop the existing database, if it exists.

**Applicable commands:** `init` or `run`. For the `run` command, this flag must be used in conjunction with `--init`. -`--duration` | The duration to run, with a required time unit suffix. Valid [time units](https://en.wikipedia.org/wiki/Orders_of_magnitude_(time)) are `ns`, `us`, `ms`, `s`, `m`, and `h`.

**Applicable command:** `run`
**Default:** `0`, which means run forever. -`--families` | Place each column in its own [column family](column-families.html).

**Applicable commands:** `init` or `run` -`--histograms` | The file to write per-op incremental and cumulative histogram data to.

**Applicable command:** `run` -`--init` | **Deprecated.** Use the `init` command instead.

**Applicable command:** `run` -`--insert-count` | Number of rows to sequentially insert before beginning random number generation.

**Applicable commands:** `init` or `run`
**Default:** `10000` -`--json` | Use JSONB rather than relational data.

**Applicable commands:** `init` or `run` -`--max-ops` | The maximum number of operations to run.

**Applicable command:** `run` -`--max-rate` | The maximum frequency of operations (reads/writes).

**Applicable command:** `run`
**Default:** `0`, which means unlimited. -`--method` | The SQL issue method (`prepare`, `noprepare`, `simple`).

**Applicable commands:** `init` or `run`
**Default:** `prepare` -`--ramp` | The duration over which to ramp up load.

**Applicable command:** `run` -`--request-distribution` | Distribution for the random number generator (`zipfian`, `uniform`).

**Applicable commands:** `init` or `run`<br>
**Default:** `zipfian` -`--seed` | The random number generator seed.

**Applicable commands:** `init` or `run`
**Default:** `1` -`--splits` | Number of [splits](split-at.html) to perform before starting normal operations.

**Applicable commands:** `init` or `run` -`--tolerate-errors` | Keep running on error.

**Applicable command:** `run` -`--workload` | The type of workload to run (`A`, `B`, `C`, `D`, or `F`). For details about these workloads, see [YCSB Workloads](https://github.com/brianfrankcooper/YCSB/wiki/Core-Workloads).

**Applicable commands:** `init` or `run`
**Default:** `B` - -### Logging - -By default, the `cockroach workload` command logs messages to `stderr`. This includes events with `INFO` [severity](logging.html#logging-levels-severities) and higher. - -If you need to troubleshoot this command's behavior, you can [customize its logging behavior](configure-logs.html). - -## Examples - -These examples assume that you have already [started an insecure cluster locally](start-a-local-cluster.html): - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach start \ ---insecure \ ---listen-addr=localhost -~~~ - -### Run the `bank` workload - -1. Load the initial schema: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach workload init bank \ - 'postgresql://root@localhost:26257?sslmode=disable' - ~~~ - -2. Run the workload for 1 minute: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach workload run bank \ - --duration=1m \ - 'postgresql://root@localhost:26257?sslmode=disable' - ~~~ - - You'll see per-operation statistics print to standard output every second: - - ~~~ - _elapsed___errors__ops/sec(inst)___ops/sec(cum)__p50(ms)__p95(ms)__p99(ms)_pMax(ms) - 1s 0 1608.6 1702.2 4.5 7.3 12.6 65.0 transfer - 2s 0 1725.3 1713.8 4.5 7.9 13.1 19.9 transfer - 3s 0 1721.1 1716.2 4.5 7.3 11.5 21.0 transfer - 4s 0 1328.7 1619.2 5.5 10.5 17.8 39.8 transfer - 5s 0 1389.3 1573.3 5.2 11.5 16.3 23.1 transfer - 6s 0 1640.0 1584.4 5.0 7.9 12.1 16.3 transfer - 7s 0 1594.0 1585.8 5.0 7.9 10.5 15.7 transfer - 8s 0 1652.8 1594.2 4.7 7.9 11.5 29.4 transfer - 9s 0 1451.9 1578.4 5.2 10.0 15.2 26.2 transfer - 10s 0 1653.3 1585.9 5.0 7.6 10.0 18.9 transfer - ... - ~~~ - - After the specified duration (1 minute in this case), the workload will stop and you'll see totals printed to standard output: - - ~~~ - _elapsed___errors_____ops(total)___ops/sec(cum)__avg(ms)__p50(ms)__p95(ms)__p99(ms)_pMax(ms)__result - 60.0s 0 84457 1407.6 5.7 5.5 10.0 15.2 167.8 - ~~~ - -### Run the `kv` workload - -1. Load the initial schema: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach workload init kv \ - 'postgresql://root@localhost:26257?sslmode=disable' - ~~~ - -2. Run the workload for 1 minute: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach workload run kv \ - --duration=1m \ - 'postgresql://root@localhost:26257?sslmode=disable' - ~~~ - - You'll see per-operation statistics print to standard output every second: - - ~~~ - _elapsed___errors__ops/sec(inst)___ops/sec(cum)__p50(ms)__p95(ms)__p99(ms)_pMax(ms) - 1s 0 5095.8 5123.7 1.5 2.5 3.3 7.3 write - 2s 0 4795.4 4959.6 1.6 2.8 3.5 8.9 write - 3s 0 3456.5 4458.5 2.0 4.5 7.3 24.1 write - 4s 0 2787.9 4040.8 2.4 6.3 12.6 30.4 write - 5s 0 3558.7 3944.4 2.0 4.2 6.8 11.5 write - 6s 0 3733.8 3909.3 1.9 4.2 6.0 12.6 write - 7s 0 3565.6 3860.1 2.0 4.7 7.9 25.2 write - 8s 0 3469.3 3811.4 2.0 5.0 6.8 22.0 write - 9s 0 3937.6 3825.4 1.8 3.7 7.3 29.4 write - 10s 0 3822.9 3825.1 1.8 4.7 8.9 37.7 write - ... - ~~~ - - After the specified duration (1 minute in this case), the workload will stop and you'll see totals printed to standard output: - - ~~~ - _elapsed___errors_____ops(total)___ops/sec(cum)__avg(ms)__p50(ms)__p95(ms)__p99(ms)_pMax(ms)__result - 60.0s 0 276067 4601.0 1.7 1.6 3.1 5.2 96.5 - ~~~ - -### Load the `intro` dataset - -1. Load the dataset: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach workload init intro \ - 'postgresql://root@localhost:26257?sslmode=disable' - ~~~ - -2. 
Launch the built-in SQL client to view it: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --insecure - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SHOW TABLES FROM intro; - ~~~ - - ~~~ - table_name - +------------+ - mytable - (1 row) - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ SELECT * FROM intro.mytable WHERE (l % 2) = 0; - ~~~ - - ~~~ - l | v - +----+------------------------------------------------------+ - 0 | !__aaawwmqmqmwwwaas,,_ .__aaawwwmqmqmwwaaa,, - 2 | !"VT?!"""^~~^"""??T$Wmqaa,_auqmWBT?!"""^~~^^""??YV^ - 4 | ! "?##mW##?"- - 6 | ! C O N G R A T S _am#Z??A#ma, Y - 8 | ! _ummY" "9#ma, A - 10 | ! vm#Z( )Xmms Y - 12 | ! .j####mmm#####mm#m##6. - 14 | ! W O W ! jmm###mm######m#mmm##6 - 16 | ! ]#me*Xm#m#mm##m#m##SX##c - 18 | ! dm#||+*$##m#mm#m#Svvn##m - 20 | ! :mmE=|+||S##m##m#1nvnnX##; A - 22 | ! :m#h+|+++=Xmm#m#1nvnnvdmm; M - 24 | ! Y $#m>+|+|||##m#1nvnnnnmm# A - 26 | ! O ]##z+|+|+|3#mEnnnnvnd##f Z - 28 | ! U D 4##c|+|+|]m#kvnvnno##P E - 30 | ! I 4#ma+|++]mmhvnnvq##P` ! - 32 | ! D I ?$#q%+|dmmmvnnm##! - 34 | ! T -4##wu#mm#pw##7' - 36 | ! -?$##m####Y' - 38 | ! !! "Y##Y"- - 40 | ! - (21 rows) - ~~~ - -### Load the `startrek` dataset - -1. Load the dataset: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach workload init startrek \ - 'postgresql://root@localhost:26257?sslmode=disable' - ~~~ - -2. Launch the built-in SQL client to view it: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --insecure - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SHOW TABLES FROM startrek; - ~~~ - - ~~~ - table_name - +------------+ - episodes - quotes - (2 rows) - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SELECT * FROM startrek.episodes WHERE stardate > 5500; - ~~~ - - ~~~ - id | season | num | title | stardate - +----+--------+-----+-----------------------------------+----------+ - 60 | 3 | 5 | Is There in Truth No Beauty? | 5630.7 - 62 | 3 | 7 | Day of the Dove | 5630.3 - 64 | 3 | 9 | The Tholian Web | 5693.2 - 65 | 3 | 10 | Plato's Stepchildren | 5784.2 - 66 | 3 | 11 | Wink of an Eye | 5710.5 - 69 | 3 | 14 | Whom Gods Destroy | 5718.3 - 70 | 3 | 15 | Let That Be Your Last Battlefield | 5730.2 - 73 | 3 | 18 | The Lights of Zetar | 5725.3 - 74 | 3 | 19 | Requiem for Methuselah | 5843.7 - 75 | 3 | 20 | The Way to Eden | 5832.3 - 76 | 3 | 21 | The Cloud Minders | 5818.4 - 77 | 3 | 22 | The Savage Curtain | 5906.4 - 78 | 3 | 23 | All Our Yesterdays | 5943.7 - 79 | 3 | 24 | Turnabout Intruder | 5928.5 - (14 rows) - ~~~ - -### Load the `movr` dataset - -1. Load the dataset: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach workload init movr \ - 'postgresql://root@localhost:26257?sslmode=disable' - ~~~ - -2. 
Launch the built-in SQL client to view it: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --insecure - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SHOW TABLES FROM movr; - ~~~ - - ~~~ - table_name -+----------------------------+ - promo_codes - rides - user_promo_codes - users - vehicle_location_histories - vehicles -(6 rows) - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SELECT * FROM movr.users WHERE city='new york'; - ~~~ - - ~~~ - id | city | name | address | credit_card -+--------------------------------------+----------+------------------+-----------------------------+-------------+ - 00000000-0000-4000-8000-000000000000 | new york | Robert Murphy | 99176 Anderson Mills | 8885705228 - 051eb851-eb85-4ec0-8000-000000000001 | new york | James Hamilton | 73488 Sydney Ports Suite 57 | 8340905892 - 0a3d70a3-d70a-4d80-8000-000000000002 | new york | Judy White | 18580 Rosario Ville Apt. 61 | 2597958636 - 0f5c28f5-c28f-4c00-8000-000000000003 | new york | Devin Jordan | 81127 Angela Ferry Apt. 8 | 5614075234 - 147ae147-ae14-4b00-8000-000000000004 | new york | Catherine Nelson | 1149 Lee Alley | 0792553487 -(5 rows) - ~~~ - -### Run the `movr` workload - -1. Load the initial schema: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach workload init movr \ - 'postgresql://root@localhost:26257?sslmode=disable' - ~~~ - -2. Initialize and run the workload for 1 minute: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach workload run movr \ - --duration=1m \ - 'postgresql://root@localhost:26257?sslmode=disable' - ~~~ - - You'll see per-operation statistics print to standard output every second: - - ~~~ - _elapsed___errors__ops/sec(inst)___ops/sec(cum)__p50(ms)__p95(ms)__p99(ms)_pMax(ms) - 1.0s 0 31.9 32.0 0.5 0.6 1.4 1.4 addUser - 1.0s 0 6.0 6.0 1.2 1.4 1.4 1.4 addVehicle - 1.0s 0 10.0 10.0 2.2 6.3 6.3 6.3 applyPromoCode - 1.0s 0 2.0 2.0 0.5 0.6 0.6 0.6 createPromoCode - 1.0s 0 9.0 9.0 0.9 1.6 1.6 1.6 endRide - 1.0s 0 1407.5 1407.8 0.3 0.5 0.7 4.1 readVehicles - 1.0s 0 27.0 27.0 2.1 3.1 4.7 4.7 startRide - 1.0s 0 86.8 86.9 4.7 8.4 11.5 15.2 updateActiveRides - 2.0s 0 26.0 29.0 0.5 1.1 1.4 1.4 addUser - 2.0s 0 8.0 7.0 1.2 2.8 2.8 2.8 addVehicle - 2.0s 0 2.0 6.0 2.6 2.8 2.8 2.8 applyPromoCode - 2.0s 0 0.0 1.0 0.0 0.0 0.0 0.0 createPromoCode - 2.0s 0 6.0 7.5 0.8 1.7 1.7 1.7 endRide - 2.0s 0 1450.4 1429.1 0.3 0.6 0.9 2.6 readVehicles - 2.0s 0 17.0 22.0 2.1 3.3 5.5 5.5 startRide - 2.0s 0 59.0 72.9 6.3 11.5 11.5 14.2 updateActiveRides - ... - ~~~ - - After the specified duration (1 minute in this case), the workload will stop and you'll see totals printed to standard output: - - ~~~ - _elapsed___errors_____ops(total)___ops/sec(cum)__avg(ms)__p50(ms)__p95(ms)__p99(ms)_pMax(ms)__result - 60.0s 0 85297 1421.6 0.7 0.3 2.6 7.1 30.4 - ~~~ - -### Run the `tpcc` workload - -1. Load the initial schema and data: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach workload init tpcc \ - 'postgresql://root@localhost:26257?sslmode=disable' - ~~~ - -2. 
Run the workload for 10 minutes: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach workload run tpcc \ - --duration=10m \ - 'postgresql://root@localhost:26257?sslmode=disable' - ~~~ - - You'll see per-operation statistics print to standard output every second: - - ~~~ - _elapsed___errors__ops/sec(inst)___ops/sec(cum)__p50(ms)__p95(ms)__p99(ms)_pMax(ms) - 1s 0 1443.4 1494.8 4.7 9.4 27.3 67.1 transfer - 2s 0 1686.5 1590.9 4.7 8.1 15.2 28.3 transfer - 3s 0 1735.7 1639.0 4.7 7.3 11.5 28.3 transfer - 4s 0 1542.6 1614.9 5.0 8.9 12.1 21.0 transfer - 5s 0 1695.9 1631.1 4.7 7.3 11.5 22.0 transfer - 6s 0 1569.2 1620.8 5.0 8.4 11.5 15.7 transfer - 7s 0 1614.6 1619.9 4.7 8.1 12.1 16.8 transfer - 8s 0 1344.4 1585.6 5.8 10.0 15.2 31.5 transfer - 9s 0 1351.9 1559.5 5.8 10.0 16.8 54.5 transfer - 10s 0 1514.8 1555.0 5.2 8.1 12.1 16.8 transfer - ... - ~~~ - - After the specified duration (10 minutes in this case), the workload will stop and you'll see totals printed to standard output: - - ~~~ - _elapsed___errors_____ops(total)___ops/sec(cum)__avg(ms)__p50(ms)__p95(ms)__p99(ms)_pMax(ms)__result - 600.0s 0 823902 1373.2 5.8 5.5 10.0 15.2 209.7 - ~~~ - -### Run the `ycsb` workload - -1. Load the initial schema and data: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach workload init ycsb \ - 'postgresql://root@localhost:26257?sslmode=disable' - ~~~ - -2. Run the workload for 10 minutes: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach workload run ycsb \ - --duration=10m \ - 'postgresql://root@localhost:26257?sslmode=disable' - ~~~ - - You'll see per-operation statistics print to standard output every second: - - ~~~ - _elapsed___errors__ops/sec(inst)___ops/sec(cum)__p50(ms)__p95(ms)__p99(ms)_pMax(ms) - 1s 0 9258.1 9666.6 0.7 1.3 2.0 8.9 read - 1s 0 470.1 490.9 1.7 2.9 4.1 5.0 update - 2s 0 10244.6 9955.6 0.7 1.2 2.0 6.6 read - 2s 0 559.0 525.0 1.6 3.1 6.0 7.3 update - 3s 0 9870.8 9927.4 0.7 1.4 2.4 10.0 read - 3s 0 500.0 516.6 1.6 4.2 7.9 15.2 update - 4s 0 9847.2 9907.3 0.7 1.4 2.4 23.1 read - 4s 0 506.8 514.2 1.6 3.7 7.6 17.8 update - 5s 0 10084.4 9942.6 0.7 1.3 2.1 7.1 read - 5s 0 537.2 518.8 1.5 3.5 10.0 15.2 update - ... - ~~~ - - After the specified duration (10 minutes in this case), the workload will stop and you'll see totals printed to standard output: - - ~~~ - _elapsed___errors_____ops(total)___ops/sec(cum)__avg(ms)__p50(ms)__p95(ms)__p99(ms)_pMax(ms)__result - 600.0s 0 4728286 7880.2 1.0 0.9 2.2 5.2 268.4 - ~~~ - -### Customize the frequency and format of per-operation statistics - -To customize the frequency of per-operation statistics, use the `--display-every` flag, with `ns`, `us`, `ms`, `s`, `m`, and `h` as valid [time units](https://en.wikipedia.org/wiki/Orders_of_magnitude_(time)). To customize the format of per-operation statistics, use the `--display-format` flag, with `incremental-json` or `simple` (default) as options. - -1. Load the initial schema and data: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach workload init ycsb \ - 'postgresql://root@localhost:26257?sslmode=disable' - ~~~ - -2. 
Run the workload for 1 minute, printing the output every 5 seconds as JSON: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach workload run ycsb \ - --duration=1m \ - --display-every=5s \ - --display-format=incremental-json \ - 'postgresql://root@localhost:26257?sslmode=disable' - ~~~ - - ~~~ - {"time":"2019-09-13T03:25:03.950621Z","errs":0,"avgt":8434.5,"avgl":8471.0,"p50l":0.8,"p95l":1.6,"p99l":3.1,"maxl":19.9,"type":"read"} - {"time":"2019-09-13T03:25:03.950621Z","errs":0,"avgt":438.1,"avgl":440.0,"p50l":1.5,"p95l":2.8,"p99l":4.5,"maxl":14.7,"type":"update"} - {"time":"2019-09-13T03:25:08.95061Z","errs":0,"avgt":7610.6,"avgl":8040.8,"p50l":0.8,"p95l":2.0,"p99l":4.2,"maxl":65.0,"type":"read"} - {"time":"2019-09-13T03:25:08.95061Z","errs":0,"avgt":391.8,"avgl":415.9,"p50l":1.6,"p95l":3.5,"p99l":5.8,"maxl":21.0,"type":"update"} - {"time":"2019-09-13T03:25:13.950727Z","errs":0,"avgt":7242.0,"avgl":7774.5,"p50l":0.8,"p95l":2.2,"p99l":4.7,"maxl":75.5,"type":"read"} - {"time":"2019-09-13T03:25:13.950727Z","errs":0,"avgt":382.0,"avgl":404.6,"p50l":1.6,"p95l":4.7,"p99l":10.5,"maxl":24.1,"type":"update"} - ... - ~~~ - - When using `incremental-json`, note that totals are not printed at the end of the workload's duration. - -## See also - -- [`cockroach demo`](cockroach-demo.html) -- [`cockroach` Commands Overview](cockroach-commands.html) -- [Performance Benchmarking with TPC-C](performance-benchmarking-with-tpcc-small.html) diff --git a/src/current/v22.1/cockroachdb-feature-availability.md b/src/current/v22.1/cockroachdb-feature-availability.md deleted file mode 100644 index 6257fcf459f..00000000000 --- a/src/current/v22.1/cockroachdb-feature-availability.md +++ /dev/null @@ -1,203 +0,0 @@ ---- -title: CockroachDB Feature Availability -summary: Learn about the features available in preview and limited access in CockroachDB -toc: true -docs_area: reference.sql -key: experimental-features.html ---- - -Some CockroachDB features are made available in phases prior to being launched in general availability (GA). This page defines the different levels of CockroachDB {{ page.version.version }} feature availability and lists the features in each phase. - -{{site.data.alerts.callout_info}} -This page outlines _feature availability_, which is separate from Cockroach Labs' [Release Support Policy](../releases/release-support-policy.html) or [API Support Policy](api-support-policy.html). -{{site.data.alerts.end}} - -## Feature availability phases - -Phase | Definition | Accessibility -----------------------------------------------+------------+------------- -Private preview | Feature is not production-ready and will not be publicly documented. | Invite-only -[Limited access](#features-in-limited-access) | Feature is production-ready but not available widely because of known limitations and/or because capabilities may change or be added based on feedback. | Opt-in
Contact your Cockroach Labs account team. -[Preview](#features-in-preview) | Feature is production-ready and publicly available. However, this feature may have known limitations and/or capabilities may change or be added based on feedback. | Public -General availability (GA) | Feature is production-ready and publicly available. | Public - -## Features in limited access - -{{site.data.alerts.callout_info}} -**The following features are in limited access** and are only available to enrolled organizations. To enroll your organization, contact your Cockroach Labs account team. These features are subject to change. -{{site.data.alerts.end}} - -### Export logs from CockroachDB {{ site.data.products.dedicated }} clusters - -CockroachDB {{ site.data.products.dedicated }} users can use the [Cloud API](../cockroachcloud/cloud-api.html) to configure [log export](../cockroachcloud/export-logs.html) to [AWS CloudWatch](https://aws.amazon.com/cloudwatch/) or [GCP Cloud Logging](https://cloud.google.com/logging). Once the export is configured, logs will flow from all nodes in all regions of your CockroachDB {{ site.data.products.dedicated }} cluster to your chosen cloud log sink. You can configure log export to redact sensitive log entries, limit log output by severity, and send log entries to specific log group targets by log channel, among others. - -## Features in preview - -This page lists the features that are available in preview in CockroachDB {{ page.version.version }}. These features are subject to change. To share feedback and/or issues, contact [Support](https://support.cockroachlabs.com/hc/en-us). - -### `cockroach` commands - -The table below lists the [`cockroach` commands](cockroach-commands.html) available in preview in CockroachDB. - -Command | Description ---------------------------------------------+------------- -[`cockroach demo`](cockroach-demo.html) | Start a temporary, in-memory CockroachDB cluster, and open an interactive SQL shell to it. -[`cockroach sqlfmt`](cockroach-sqlfmt.html) | Reformat SQL queries for enhanced clarity. - -### `SESSIONS` channel - -The [`SESSIONS`](logging.html#sessions) channel logs SQL session events. This includes client connection and session authentication events, for which logging must be enabled separately. For complete logging of client connections, we recommend enabling both types of events. - -### Super regions - -[Super regions](multiregion-overview.html#super-regions) allow you to define a set of database regions such that schema objects will have all of their replicas stored _only_ in regions that are members of the super region. The primary use case for super regions is data domiciling. - -### Functions and Operators - -The table below lists the SQL functions and operators available in preview in CockroachDB. For more information, see each function's documentation at [Functions and Operators](functions-and-operators.html). - -Function | Description ----------------------------------------------------------------------------------+------------------------------------------------ -[`experimental_strftime`](functions-and-operators.html#date-and-time-functions) | Format time using standard `strftime` notation. -[`experimental_strptime`](functions-and-operators.html#date-and-time-functions) | Format time using standard `strptime` notation. -[`experimental_uuid_v4()`](functions-and-operators.html#id-generation-functions) | Return a UUID. 
- -### Export metrics from CockroachDB {{ site.data.products.dedicated }} clusters - -CockroachDB {{ site.data.products.dedicated }} users can use the [Cloud API](../cockroachcloud/cloud-api.html) to configure [metrics export](../cockroachcloud/export-metrics.html) to [AWS CloudWatch](https://aws.amazon.com/cloudwatch/) or [Datadog](https://www.datadoghq.com/). Once the export is configured, metrics will flow from all nodes in all regions of your CockroachDB {{ site.data.products.dedicated }} cluster to your chosen cloud metrics sink. - -### Keep SQL audit logs - -Log all queries against a table to a file, for security purposes. For more information, see [`ALTER TABLE ... EXPERIMENTAL_AUDIT`](experimental-audit.html). - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE t EXPERIMENTAL_AUDIT SET READ WRITE; -~~~ - -### Show table fingerprints - -Table fingerprints are used to compute an identification string of an entire table, for the purpose of gauging whether two tables have the same data. This is useful, for example, when restoring a table from backup. - -Example: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW EXPERIMENTAL_FINGERPRINTS FROM TABLE t; -~~~ - -~~~ - index_name | fingerprint -------------+--------------------- - primary | 1999042440040364641 -(1 row) -~~~ - -### Turn on KV event tracing - -Use session tracing (via [`SHOW TRACE FOR SESSION`](show-trace.html)) to report the replicas of all KV events that occur during its execution. - -Example: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SET tracing = on; -> SELECT * from t; -> SET tracing = off; -> SHOW EXPERIMENTAL_REPLICA TRACE FOR SESSION; -~~~ - -~~~ - timestamp | node_id | store_id | replica_id -----------------------------------+---------+----------+------------ - 2018-10-18 15:50:13.345879+00:00 | 3 | 3 | 7 - 2018-10-18 15:50:20.628383+00:00 | 2 | 2 | 26 -~~~ - -### Check for constraint violations with `SCRUB` - -Checks the consistency of [`UNIQUE`](unique.html) indexes, [`CHECK`](check.html) constraints, and more. Partially implemented; see [cockroachdb/cockroach#10425](https://github.com/cockroachdb/cockroach/issues/10425) for details. - -{{site.data.alerts.callout_info}} -This example uses the `users` table from our open-source, fictional peer-to-peer vehicle-sharing application, [MovR](movr.html). -{{site.data.alerts.end}} - -{% include_cached copy-clipboard.html %} -~~~ sql -> EXPERIMENTAL SCRUB table movr.users; -~~~ - -~~~ - job_uuid | error_type | database | table | primary_key | timestamp | repaired | details -----------+--------------------------+----------+-------+----------------------------------------------------------+---------------------------+----------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- - | index_key_decoding_error | movr | users | ('boston','0009eeb5-d779-4bf8-b1bd-8566533b105c') | 2018-10-18 16:00:38.65916 | f | {"error_message": "key ordering did not match datum ordering. 
IndexDescriptor=ASC", "index_name": "primary", "row_data": {"address": "e'06484 Christine Villages\\nGrantport, TN 01572'", "city": "'boston'", "credit_card": "'4634253150884'", "id": "'0009eeb5-d779-4bf8-b1bd-8566533b105c'", "name": "'Jessica Webb'"}} - | index_key_decoding_error | movr | users | ('los angeles','0001252c-fc16-4006-b6dc-c6b1a0fd1f5b') | 2018-10-18 16:00:38.65916 | f | {"error_message": "key ordering did not match datum ordering. IndexDescriptor=ASC", "index_name": "primary", "row_data": {"address": "e'91309 Warner Springs\\nLake Danielmouth, PR 33400'", "city": "'los angeles'", "credit_card": "'3584736360686445'", "id": "'0001252c-fc16-4006-b6dc-c6b1a0fd1f5b'", "name": "'Rebecca Gibson'"}} - | index_key_decoding_error | movr | users | ('new york','000169a5-e337-4441-b664-dae63e682980') | 2018-10-18 16:00:38.65916 | f | {"error_message": "key ordering did not match datum ordering. IndexDescriptor=ASC", "index_name": "primary", "row_data": {"address": "e'0787 Christopher Highway Apt. 363\\nHamptonmouth, TX 91864-2620'", "city": "'new york'", "credit_card": "'4578562547256688'", "id": "'000169a5-e337-4441-b664-dae63e682980'", "name": "'Christopher Johnson'"}} - | index_key_decoding_error | movr | users | ('paris','00089fc4-e5b1-48f6-9f0b-409905f228c4') | 2018-10-18 16:00:38.65916 | f | {"error_message": "key ordering did not match datum ordering. IndexDescriptor=ASC", "index_name": "primary", "row_data": {"address": "e'46735 Martin Summit\\nMichaelview, OH 10906-5889'", "city": "'paris'", "credit_card": "'5102207609888778'", "id": "'00089fc4-e5b1-48f6-9f0b-409905f228c4'", "name": "'Nicole Fuller'"}} - | index_key_decoding_error | movr | users | ('rome','000209fc-69a1-4dd5-8053-3b5e5769876d') | 2018-10-18 16:00:38.65916 | f | {"error_message": "key ordering did not match datum ordering. IndexDescriptor=ASC", "index_name": "primary", "row_data": {"address": "e'473 Barrera Vista Apt. 890\\nYeseniaburgh, CO 78087'", "city": "'rome'", "credit_card": "'3534605564661093'", "id": "'000209fc-69a1-4dd5-8053-3b5e5769876d'", "name": "'Sheryl Shea'"}} - | index_key_decoding_error | movr | users | ('san francisco','00058767-1e83-4e18-999f-13b5a74d7225') | 2018-10-18 16:00:38.65916 | f | {"error_message": "key ordering did not match datum ordering. IndexDescriptor=ASC", "index_name": "primary", "row_data": {"address": "e'5664 Acevedo Drive Suite 829\\nHernandezview, MI 13516'", "city": "'san francisco'", "credit_card": "'376185496850202'", "id": "'00058767-1e83-4e18-999f-13b5a74d7225'", "name": "'Kevin Turner'"}} - | index_key_decoding_error | movr | users | ('seattle','0002e904-1256-4528-8b5f-abad16e695ff') | 2018-10-18 16:00:38.65916 | f | {"error_message": "key ordering did not match datum ordering. IndexDescriptor=ASC", "index_name": "primary", "row_data": {"address": "e'81499 Samuel Crescent Suite 631\\nLake Christopherborough, PR 50401'", "city": "'seattle'", "credit_card": "'38743493725890'", "id": "'0002e904-1256-4528-8b5f-abad16e695ff'", "name": "'Mark Williams'"}} - | index_key_decoding_error | movr | users | ('washington dc','00007caf-2014-4696-85b0-840e7d8b6db9') | 2018-10-18 16:00:38.65916 | f | {"error_message": "key ordering did not match datum ordering. 
IndexDescriptor=ASC", "index_name": "primary", "row_data": {"address": "e'4578 Holder Trafficway\\nReynoldsside, IL 23520-7418'", "city": "'washington dc'", "credit_card": "'30454993082943'", "id": "'00007caf-2014-4696-85b0-840e7d8b6db9'", "name": "'Marie Miller'"}} -(8 rows) -~~~ - -### Show range information for a specific row - -The [`SHOW RANGE ... FOR ROW`](show-range-for-row.html) statement shows information about a [range](architecture/overview.html#architecture-range) for a particular row of data. This information is useful for verifying how SQL data maps to underlying ranges, and where the replicas for a range are located. - -### Alter column types - -CockroachDB supports [altering the column types](alter-column.html#altering-column-data-types) of existing tables, with certain limitations. To enable altering column types, set the `enable_experimental_alter_column_type_general` [session variable](show-vars.html) to `true`. - -### Temporary objects - -[Temporary tables](temporary-tables.html), [temporary views](views.html#temporary-views), and [temporary sequences](create-sequence.html#temporary-sequences) are in preview in CockroachDB. If you create too many temporary objects in a session, the performance of DDL operations will degrade. Performance limitations could persist long after creating the temporary objects. For more details, see [cockroachdb/cockroach#46260](https://github.com/cockroachdb/cockroach/issues/46260). - -To enable temporary objects, set the `experimental_enable_temp_tables` [session variable](show-vars.html) to `on`. - -### Password authentication without TLS - -For deployments where transport security is already handled at the infrastructure level (e.g., IPSec with DMZ), and TLS-based transport security is not possible or not desirable, CockroachDB supports delegating transport security to the infrastructure with the flag `--accept-sql-without-tls` for [`cockroach start`](cockroach-start.html#security). - -With this flag, SQL clients can establish a session over TCP without a TLS handshake. They still need to present valid authentication credentials, for example a password in the default configuration. Different authentication schemes can be further configured as per `server.host_based_authentication.configuration`. - -Example: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach sql --user=jpointsman --insecure -~~~ - -~~~ - # Welcome to the CockroachDB SQL shell. - # All statements must be terminated by a semicolon. - # To exit, type: \q. - # - Enter password: -~~~ - -### Core implementation of changefeeds - -The [`EXPERIMENTAL CHANGEFEED FOR`](changefeed-for.html) statement creates a new core changefeed, which streams row-level changes to the client indefinitely until the underlying connection is closed or the changefeed is canceled. A core changefeed can watch one table or multiple tables in a comma-separated list. - -### Changefeed metrics labels - -{% include {{ page.version.version }}/cdc/metrics-labels.md %} - -For usage details, see the [Monitor and Debug Changefeeds](monitor-and-debug-changefeeds.html) page. - -### Google Pub/Sub sink for changefeeds - -Changefeeds can deliver messages to a [Google Cloud Pub/Sub sink](changefeed-sinks.html#google-cloud-pub-sub), which is integrated with Google Cloud Platform. - -### Webhook sink for changefeeds - -Use a [webhook sink](changefeed-sinks.html#webhook-sink) to deliver changefeed messages to an arbitrary HTTP endpoint. 
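-
-For example, the following is a sketch of a changefeed that emits messages to a webhook sink; the table name and endpoint URL are illustrative placeholders for your own:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
--- 'orders' and the webhook URL below are placeholders.
-> CREATE CHANGEFEED FOR TABLE orders INTO 'webhook-https://example.com/changefeed' WITH updated;
-~~~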
-
-## See Also
-
-- [`SHOW {session variable}`](show-vars.html)
-- [Functions and Operators](functions-and-operators.html)
-- [`ALTER TABLE ... EXPERIMENTAL_AUDIT`](experimental-audit.html)
-- [`SHOW TRACE FOR SESSION`](show-trace.html)
-- [`SHOW RANGE ... FOR ROW`](show-range-for-row.html)
diff --git a/src/current/v22.1/cockroachdb-in-comparison.md deleted file mode 100644 index d632f831968..00000000000 --- a/src/current/v22.1/cockroachdb-in-comparison.md +++ /dev/null @@ -1,354 +0,0 @@
----
-title: CockroachDB in Comparison
-summary: Learn how CockroachDB compares to other popular databases like PostgreSQL, Cassandra, MongoDB, Google Cloud Spanner, and more.
-tags: mongodb, mysql, dynamodb
-toc: false
-comparison: true
-docs_area: get_started
----
-
-This page shows you how the key features of CockroachDB stack up against other databases.
-
-Feature | Other databases (behavior varies by product) | CockroachDB
---------|-----------------------------------------------|------------
-Database horizontal scale | Manual Sharding; Add on configuration; Node based, automated read scale, limited write; Node based, automated for both reads and writes | Node based, automated for both reads and writes
-Database load balancing (internal) | Manual - not part of database; None and full copies across regions; Even distribution to optimize storage | Detailed options to optimize storage, compute, and latency
-Failover | Manual - not part of database; Automated for reads, limited for writes to one region; Automated for reads, limited guarantees for writes; Fully automated for both reads and writes | Fully automated for both reads and writes
-Automated repair and RPO (Recovery Point Objective) | Manual repair, RPO ~1-60 mins; Automated, RPO ~1-5 mins; Manual & automated repair, RPO <1 min; Automated repair, RPO <10 sec | Automated repair, RPO = 0 sec
-Distributed reads | Manual - asynchronous; Yes | Yes
-Distributed transactions | No; Lightweight transactions only; Yes | Yes
-Database isolation levels | Single region consistent (default: Snapshot, highest: Serializable); Eventual consistent (default: Read uncommitted, highest: Snapshot read); Eventual consistent (no transaction isolation guarantees); Default: Snapshot, highest: Serializable | Guaranteed consistent (default: Serializable, highest: Serializable)
-Potential data issues (default) | Phantom reads, non-repeatable reads, write skew; Dirty reads, phantom reads, non-repeatable reads, write skew; Dirty reads, phantom reads, non-repeatable reads, write conflicts; Phantom reads, non-repeatable reads; None | None
-SQL | Yes; No; Yes - with limitations | Yes - wire compatible with PostgreSQL
-Database schema change | Yes; Offline; Online, Active, and Dynamic | Online, Active, and Dynamic
-Cost based optimization | Yes; No; ? | Yes
-Data Geo-partitioning | No; Yes, object level; Yes | Yes, row level
-Upgrade method | Offline; Online, rolling | Online, rolling
-Multi-region | Yes - manual; Yes, but not for writes; Yes, for both reads and writes | Yes, for both reads and writes
-Multi-cloud | No; Yes | Yes
- - diff --git a/src/current/v22.1/collate.md b/src/current/v22.1/collate.md deleted file mode 100644 index b00c390b6f4..00000000000 --- a/src/current/v22.1/collate.md +++ /dev/null @@ -1,244 +0,0 @@ ---- -title: COLLATE -summary: The COLLATE feature lets you sort strings according to language- and country-specific rules. -toc: true -docs_area: reference.sql ---- - -The `COLLATE` feature lets you sort [`STRING`](string.html) values according to language- and country-specific rules, known as collations. - -Collated strings are important because different languages have [different rules for alphabetic order](https://en.wikipedia.org/wiki/Alphabetical_order#Language-specific_conventions), especially with respect to accented letters. For example, in German accented letters are sorted with their unaccented counterparts, while in Swedish they are placed at the end of the alphabet. A collation is a set of rules used for ordering and usually corresponds to a language, though some languages have multiple collations with different rules for sorting; for example Portuguese has separate collations for Brazilian and European dialects (`pt-BR` and `pt-PT` respectively). - -## Details - -- Operations on collated strings cannot involve strings with a different collation or strings with no collation. However, it is possible to add or overwrite a collation on the fly. - -- Only use the collation feature when you need to sort strings by a specific collation. We recommend this because every time a collated string is constructed or loaded into memory, CockroachDB computes its collation key, whose size is linear in relationship to the length of the collated string, which requires additional resources. - -- Collated strings can be considerably larger than the corresponding uncollated strings, depending on the language and the string content. For example, strings containing the character `é` produce larger collation keys in the French locale than in Chinese. - -- Collated strings that are indexed require additional disk space as compared to uncollated strings. In case of indexed collated strings, collation keys must be stored in addition to the strings from which they are derived, creating a constant factor overhead. - -{{site.data.alerts.callout_danger}} -{% include {{page.version.version}}/sql/add-size-limits-to-indexed-columns.md %} -{{site.data.alerts.end}} - -## Supported collations - -CockroachDB supports collations identified by [Unicode locale identifiers](https://cldr.unicode.org/development/core-specification#h.vgyyng33o798). For example, `en-US` identifies US English, `es` identifies Spanish, and `fr-CA` identifies Canadian French. Collation names are case-insensitive, and hyphens and underscores are interchangeable. - -{{site.data.alerts.callout_info}} -If a hyphen is used in a SQL query, the collation name must be enclosed in double quotes, as single quotes are used for SQL string literals. -{{site.data.alerts.end}} - -A list of supported collations can be found in the `pg_catalog.pg_collation` table: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT collname from pg_catalog.pg_collation; -~~~ - -~~~ - collname ------------------------ - und - aa - af - ar -... -(95 rows) -~~~ - -CockroachDB supports standard aliases for the collations listed in `pg_collation`. For example, `es-419` (Latin American Spanish) and `zh-Hans` (Simplified Chinese) are supported, but they do not appear in the `pg_collations` table because they are equivalent to the `es` and `zh` collations listed in the table. 
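-
-For example, because the `en-US` collation name contains a hyphen, it must be enclosed in double quotes when used in a query (the string literals below are arbitrary):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> SELECT 'cockroach' COLLATE "en-US" < 'lab' COLLATE "en-US";
-~~~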
-
-CockroachDB also supports the following Unicode locale extensions:
-
-- `co` (collation type)
-- `ks` (strength)
-- `kc` (case level)
-- `kb` (backwards second level weight)
-- `kn` (numeric)
-- `ka` (alternate handling)
-
-To use a locale extension, append `-u-` to the base locale name, followed by the extension. For example, `en-US-u-ks-level2` is case-insensitive US English. The `ks` modifier changes the "strength" of the collation, causing it to treat certain classes of characters as equivalent (PostgreSQL calls these "non-deterministic collations"). Setting `ks` to `level2` makes the collation case-insensitive (for languages that have this concept).
-
-For more details on locale extensions, see the [Unicode Collation Algorithm](https://en.wikipedia.org/wiki/Unicode_collation_algorithm).
-
-## Collation versioning
-
-While changes to collations are rare, they are possible, especially in languages with large numbers of characters (e.g., Simplified and Traditional Chinese). CockroachDB updates its support with new versions of the Unicode standard every year, but there is currently no way to specify the version of Unicode to use. As a result, it is possible for a collation change to invalidate existing collated string data. To prevent collated data from being invalidated by Unicode changes, we recommend storing data in columns with an uncollated string type, and then using a [computed column](computed-columns.html) for the desired collation. In the event that a collation change produces undesired effects, the computed column can be dropped and recreated.
-
-## SQL syntax
-
-Collated strings are used as normal strings in SQL, but have a `COLLATE` clause appended to them.
-
-- **Column syntax**: `STRING COLLATE <collation>`. For example:
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ sql
-    > CREATE TABLE foo (a STRING COLLATE en PRIMARY KEY);
-    ~~~
-
-    {{site.data.alerts.callout_info}}You can also use any of the aliases for STRING.{{site.data.alerts.end}}
-
-- **Value syntax**: `<STRING value> COLLATE <collation>`. For example:
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ sql
-    > INSERT INTO foo VALUES ('dog' COLLATE en);
-    ~~~
-
-## Examples
-
-### Specify collation for a column
-
-You can set a default collation for all values in a `STRING` column.
-
-For example, you can set a column's default collation to German (`de`):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE de_names (name STRING COLLATE de PRIMARY KEY);
-~~~
-
-When inserting values into this column, you must specify the collation for every value:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> INSERT INTO de_names VALUES ('Backhaus' COLLATE de), ('Bär' COLLATE de), ('Baz' COLLATE de);
-~~~
-
-The sort will now honor the `de` collation that treats *ä* as *a* in alphabetic sorting:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM de_names ORDER BY name;
-~~~
-~~~
-   name
-+----------+
-  Backhaus
-  Bär
-  Baz
-(3 rows)
-~~~
-
-### Specify collations with locale extensions
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE nocase_strings (greeting STRING COLLATE "en-US-u-ks-level2");
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> INSERT INTO nocase_strings VALUES ('Hello, friend.' COLLATE "en-US-u-ks-level2"), ('Hi. My name is Petee.' COLLATE "en-US-u-ks-level2");
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM nocase_strings WHERE greeting = ('hi. my name is petee.'
COLLATE "en-US-u-ks-level2"); -~~~ - -~~~ - greeting -+-----------------------+ - Hi. My name is Petee. -(1 row) -~~~ - -### Order by non-default collation - -You can sort a column using a specific collation instead of its default. - -For example, you receive different results if you order results by German (`de`) and Swedish (`sv`) collations: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM de_names ORDER BY name COLLATE sv; -~~~ -~~~ - name -+----------+ - Backhaus - Baz - Bär -(3 rows) -~~~ - -### Ad-hoc collation casting - -You can cast any string into a collation on the fly. - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT 'A' COLLATE de < 'Ä' COLLATE de; -~~~ -~~~ - ?column? -+----------+ - true -(1 row) -~~~ - -However, you cannot compare values with different collations: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT 'Ä' COLLATE sv < 'Ä' COLLATE de; -~~~ -~~~ -pq: unsupported comparison operator: < -~~~ - -You can also use casting to remove collations from values. - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT CAST(name AS STRING) FROM de_names ORDER BY name; -~~~ -~~~ - name -+----------+ - Backhaus - Baz - Bär -(3 rows) -~~~ - -### Show collation for strings - -You can use the `pg_collation_for` [built-in function](functions-and-operators.html#string-and-byte-functions), or its alternative [syntax form](functions-and-operators.html#special-syntax-forms) `COLLATION FOR`, to return the locale name of a collated string. - -For example: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT pg_collation_for('Bär' COLLATE de); -~~~ - -~~~ - pg_collation_for -+------------------+ - de -(1 row) -~~~ - -This is equivalent to: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT COLLATION FOR ('Bär' COLLATE de); -~~~ - -~~~ - pg_collation_for -+------------------+ - de -(1 row) -~~~ - -## See also - -[Data Types](data-types.html) diff --git a/src/current/v22.1/column-families.md b/src/current/v22.1/column-families.md deleted file mode 100644 index 17f34291de2..00000000000 --- a/src/current/v22.1/column-families.md +++ /dev/null @@ -1,95 +0,0 @@ ---- -title: Column Families -summary: A column family is a group of columns in a table that are stored as a single key-value pair in the underlying key-value store. -toc: true -docs_area: develop ---- - -A column family is a group of columns in a table that are stored as a single key-value pair in the [underlying key-value store](architecture/storage-layer.html). Column families reduce the number of keys stored in the key-value store, resulting in improved performance during [`INSERT`](insert.html), [`UPDATE`](update.html), and [`DELETE`](delete.html) operations. - -This page explains how CockroachDB organizes columns into families as well as cases in which you might want to manually override the default behavior. - - [Secondary indexes](indexes.html) respect the column family definitions applied to tables. When you define a secondary index, CockroachDB breaks the secondary index key-value pairs into column families, according to the family and stored column configurations. - -## Default behavior - -When a table is created, all columns are stored as a single column family. - -This default approach ensures efficient key-value storage and performance in most cases. However, when frequently updated columns are grouped with seldom updated columns, the seldom updated columns are nonetheless rewritten on every update. 
Especially when the seldom updated columns are large, it's more performant to split them into a distinct family. - -## Manual override - -### Assign column families on table creation - -To manually assign a column family on [table creation](create-table.html), use the `FAMILY` keyword. - -For example, let's say we want to create a table to store an immutable blob of data (`data BYTES`) with a last accessed timestamp (`last_accessed TIMESTAMP`). Because we know that the blob of data will never get updated, we use the `FAMILY` keyword to break it into a separate column family: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE test ( - id INT PRIMARY KEY, - last_accessed TIMESTAMP, - data BYTES, - FAMILY f1 (id, last_accessed), - FAMILY f2 (data) -); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW CREATE test; -~~~ - -~~~ - table_name | create_statement --------------+------------------------------------------------- - test | CREATE TABLE test ( - | id INT8 NOT NULL, - | last_accessed TIMESTAMP NULL, - | data BYTES NULL, - | CONSTRAINT "primary" PRIMARY KEY (id ASC), - | FAMILY f1 (id, last_accessed), - | FAMILY f2 (data) - | ) -(1 row) -~~~ - - -### Assign column families when adding columns - -When using the [`ALTER TABLE .. ADD COLUMN`](add-column.html) statement to add a column to a table, you can assign the column to a new or existing column family. - -- Use the `CREATE FAMILY` keyword to assign a new column to a **new family**. For example, the following would add a `data2 BYTES` column to the `test` table above and assign it to a new column family: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > ALTER TABLE test ADD COLUMN data2 BYTES CREATE FAMILY f3; - ~~~ - -- Use the `FAMILY` keyword to assign a new column to an **existing family**. For example, the following would add a `name STRING` column to the `test` table above and assign it to family `f1`: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > ALTER TABLE test ADD COLUMN name STRING FAMILY f1; - ~~~ - -- Use the `CREATE IF NOT EXISTS FAMILY` keyword to assign a new column to an **existing family or, if the family doesn't exist, to a new family**. For example, the following would assign the new column to the existing `f1` family; if that family didn't exist, it would create a new family and assign the column to it: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > ALTER TABLE test ADD COLUMN name STRING CREATE IF NOT EXISTS FAMILY f1; - ~~~ - -- If a column is added to a table and the family is not specified, it will be added to the first column family. For example, the following would add the new column to the `f1` family, since that is the first column family: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > ALTER TABLE test ADD COLUMN last_name STRING; - ~~~ - -## See also - -- [`CREATE TABLE`](create-table.html) -- [`ADD COLUMN`](add-column.html) -- [SQL Statements](sql-statements.html) diff --git a/src/current/v22.1/comment-on.md b/src/current/v22.1/comment-on.md deleted file mode 100644 index b697d7f6edf..00000000000 --- a/src/current/v22.1/comment-on.md +++ /dev/null @@ -1,215 +0,0 @@ ---- -title: COMMENT ON -summary: The COMMENT ON statement associates comments to databases, tables, columns, or indexes. -toc: true -docs_area: reference.sql ---- - -The `COMMENT ON` [statement](sql-statements.html) associates comments to [databases](create-database.html), [tables](create-table.html), [columns](add-column.html), or [indexes](indexes.html). 
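-
-For example, a minimal sketch that attaches a comment to a hypothetical `users` table:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
--- 'users' is an illustrative table name.
-> COMMENT ON TABLE users IS 'Registered application users.';
-~~~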
- -{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %} - -## Required privileges - -The user must have the `CREATE` [privilege](security-reference/authorization.html#managing-privileges) on the object they are commenting on. - -## Synopsis - -
-{% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/comment.html %} -
- -## Parameters - - Parameter | Description -------------|-------------- -`database_name` | The name of the [database](create-database.html) on which you are commenting. -`schema_name` | The name of the [schema](create-schema.html) on which you are commenting. -`table_name` | The name of the [table](create-table.html) on which you are commenting. -`column_name` | The name of the [column](add-column.html) on which you are commenting. -`table_index_name` | The name of the [index](indexes.html) on which you are commenting. -`comment_text` | The comment ([`STRING`](string.html)) you are associating to the object. You can remove a comment by replacing the string with `NULL`. - -## Examples - -{% include {{page.version.version}}/sql/movr-statements.md %} - -### Add a comment to a database - -To add a comment to a database: - -{% include_cached copy-clipboard.html %} -~~~ sql -> COMMENT ON DATABASE movr IS 'This database contains information about users, vehicles, and rides.'; -~~~ - -To view database comments, use [`SHOW DATABASES`](show-databases.html): - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW DATABASES WITH COMMENT; -~~~ - -~~~ - database_name | owner | primary_region | regions | survival_goal | comment -----------------+-------+----------------+---------+---------------+----------------------------------------------------------------------- - defaultdb | root | NULL | {} | NULL | NULL - movr | demo | NULL | {} | NULL | This database contains information about users, vehicles, and rides. - postgres | root | NULL | {} | NULL | NULL - system | node | NULL | {} | NULL | NULL -(4 rows) -~~~ - -### Add a comment to a table - -To add a comment to a table: - -{% include_cached copy-clipboard.html %} -~~~ sql -> COMMENT ON TABLE vehicles IS 'This table contains information about vehicles registered with MovR.'; -~~~ - -To view table comments, use [`SHOW TABLES`](show-tables.html): - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW TABLES FROM movr WITH COMMENT; -~~~ - -~~~ - table_name | comment -+----------------------------+----------------------------------------------------------------------+ - users | - vehicles | This table contains information about vehicles registered with MovR. - rides | - vehicle_location_histories | - promo_codes | - user_promo_codes | -(6 rows) -~~~ - - You can also view comments on a table with [`SHOW CREATE`](show-create.html): - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW CREATE TABLE vehicles; -~~~ - -~~~ - table_name | create_statement --------------+------------------------------------------------------------------------------------------------------ - vehicles | CREATE TABLE vehicles ( - | id UUID NOT NULL, - | city VARCHAR NOT NULL, - | type VARCHAR NULL, - | owner_id UUID NULL, - | creation_time TIMESTAMP NULL, - | status VARCHAR NULL, - | current_location VARCHAR NULL, - | ext JSONB NULL, - | CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC), - | CONSTRAINT fk_city_ref_users FOREIGN KEY (city, owner_id) REFERENCES users(city, id), - | INDEX vehicles_auto_index_fk_city_ref_users (city ASC, owner_id ASC), - | FAMILY "primary" (id, city, type, owner_id, creation_time, status, current_location, ext) - | ); - | COMMENT ON TABLE vehicles IS 'This table contains information about vehicles registered with MovR.' 
-(1 row) -~~~ - -### Add a comment to a column - -To add a comment to a column: - -{% include_cached copy-clipboard.html %} -~~~ sql -> COMMENT ON COLUMN users.credit_card IS 'This column contains user payment information.'; -~~~ - -To view column comments, use [`SHOW COLUMNS`](show-columns.html): - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM users WITH COMMENT; -~~~ - -~~~ - column_name | data_type | is_nullable | column_default | generation_expression | indices | is_hidden | comment -+-------------+-----------+-------------+----------------+-----------------------+-----------+-----------+------------------------------------------------+ - id | UUID | false | NULL | | {primary} | false | NULL - city | VARCHAR | false | NULL | | {primary} | false | NULL - name | VARCHAR | true | NULL | | {primary} | false | NULL - address | VARCHAR | true | NULL | | {primary} | false | NULL - credit_card | VARCHAR | true | NULL | | {primary} | false | This column contains user payment information. -(5 rows) -~~~ - -### Add a comment to an index - -Suppose we [create an index](create-index.html) on the `name` column of the `users` table: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE INDEX ON users(name); -~~~ - -To add a comment to the index: - -{% include_cached copy-clipboard.html %} -~~~ sql -> COMMENT ON INDEX users_name_idx IS 'This index improves performance on queries that filter by name.'; -~~~ - -To view column comments, use [`SHOW INDEXES ... WITH COMMENT`](show-index.html): - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW INDEXES FROM users WITH COMMENT; -~~~ - -~~~ - table_name | index_name | non_unique | seq_in_index | column_name | direction | storing | implicit | comment --------------+----------------+------------+--------------+-------------+-----------+---------+----------+------------------------------------------------------------------ - users | users_pkey | false | 1 | city | ASC | false | false | NULL - users | users_pkey | false | 2 | id | ASC | false | false | NULL - users | users_pkey | false | 3 | name | N/A | true | false | NULL - users | users_pkey | false | 4 | address | N/A | true | false | NULL - users | users_pkey | false | 5 | credit_card | N/A | true | false | NULL - users | users_name_idx | true | 1 | name | ASC | false | false | This index improves performance on queries that filter by name. - users | users_name_idx | true | 2 | city | ASC | false | true | This index improves performance on queries that filter by name. - users | users_name_idx | true | 3 | id | ASC | false | true | This index improves performance on queries that filter by name. 
-~~~ - -### Remove a comment from a database - -To remove a comment from a database: - -{% include_cached copy-clipboard.html %} -~~~ sql -> COMMENT ON DATABASE movr IS NULL; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW DATABASES WITH COMMENT; -~~~ - -~~~ - database_name | owner | primary_region | regions | survival_goal | comment -----------------+-------+----------------+---------+---------------+---------- - defaultdb | root | NULL | {} | NULL | NULL - movr | demo | NULL | {} | NULL | NULL - postgres | root | NULL | {} | NULL | NULL - system | node | NULL | {} | NULL | NULL -(4 rows) -~~~ - -## See also - -- [`CREATE DATABASE`](create-database.html) -- [`CREATE TABLE`](create-table.html) -- [`ADD COLUMN`](add-column.html) -- [`CREATE INDEX`](create-index.html) -- [`SHOW TABLES`](show-tables.html) -- [SQL Statements](sql-statements.html) -- [dBeaver](dbeaver.html) -- [Online Schema Changes](online-schema-changes.html) diff --git a/src/current/v22.1/commit-transaction.md b/src/current/v22.1/commit-transaction.md deleted file mode 100644 index 301111828c4..00000000000 --- a/src/current/v22.1/commit-transaction.md +++ /dev/null @@ -1,87 +0,0 @@ ---- -title: COMMIT -summary: Commit a transaction with the COMMIT statement in CockroachDB. -toc: true -docs_area: reference.sql ---- - -The `COMMIT` [statement](sql-statements.html) commits the current [transaction](transactions.html) or, when using [advanced client-side transaction retries](advanced-client-side-transaction-retries.html), clears the connection to allow new transactions to begin. - -When using [advanced client-side transaction retries](advanced-client-side-transaction-retries.html), statements issued after [`SAVEPOINT`](savepoint.html) are committed when [`RELEASE SAVEPOINT`](release-savepoint.html) is issued instead of `COMMIT`. However, you must still issue a `COMMIT` statement to clear the connection for the next transaction. - -For non-retryable transactions, if statements in the transaction [generated any errors](transactions.html#error-handling), `COMMIT` is equivalent to `ROLLBACK`, which aborts the transaction and discards *all* updates made by its statements. - - -## Synopsis - -
-{% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/commit_transaction.html %} -
-
-## Required privileges
-
-No [privileges](security-reference/authorization.html#managing-privileges) are required to commit a transaction. However, privileges are required for each statement within a transaction.
-
-## Aliases
-
-In CockroachDB, `END` is an alias for the `COMMIT` statement.
-
-## Example
-
-### Commit a transaction
-
-How you commit transactions depends on how your application handles [transaction retries](transactions.html#transaction-retries).
-
-#### Client-side retryable transactions
-
-When using [advanced client-side transaction retries](advanced-client-side-transaction-retries.html), statements are committed by [`RELEASE SAVEPOINT`](release-savepoint.html). `COMMIT` itself only clears the connection for the next transaction.
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> BEGIN;
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> SAVEPOINT cockroach_restart;
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> UPDATE products SET inventory = 0 WHERE sku = '8675309';
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> INSERT INTO orders (customer, sku, status) VALUES (1001, '8675309', 'new');
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> RELEASE SAVEPOINT cockroach_restart;
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> COMMIT;
-~~~
-
-{{site.data.alerts.callout_danger}}This example assumes you're using client-side intervention to handle transaction retries.{{site.data.alerts.end}}
-
-#### Automatically retried transactions
-
-If you are using transactions that CockroachDB will [automatically retry](transactions.html#automatic-retries) (i.e., all statements sent in a single batch), commit the transaction with `COMMIT`.
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> BEGIN; UPDATE products SET inventory = 100 WHERE sku = '8675309'; UPDATE products SET inventory = 100 WHERE sku = '8675310'; COMMIT;
-~~~
-
-## See also
-
-- [Transactions](transactions.html)
-- [`BEGIN`](begin-transaction.html)
-- [`RELEASE SAVEPOINT`](release-savepoint.html)
-- [`ROLLBACK`](rollback-transaction.html)
-- [`SAVEPOINT`](savepoint.html)
-- [`SHOW SAVEPOINT STATUS`](show-savepoint-status.html)
diff --git a/src/current/v22.1/common-errors.md deleted file mode 100644 index 88afaf3be5a..00000000000 --- a/src/current/v22.1/common-errors.md +++ /dev/null @@ -1,214 +0,0 @@
----
-title: Common Errors and Solutions
-summary: Understand and resolve common error messages written to stderr or logs.
-toc: false
-docs_area: manage
----
-
-This page helps you understand and resolve error messages written to `stderr` or your [logs](logging-overview.html).
- -| Topic | Message -|----------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -| Client connection | [`connection refused`](#connection-refused) -| Client connection | [`node is running secure mode, SSL connection required`](#node-is-running-secure-mode-ssl-connection-required) -| Transaction retries | [`restart transaction`](#restart-transaction) -| Node startup | [`node belongs to cluster but is attempting to connect to a gossip network for cluster `](#node-belongs-to-cluster-cluster-id-but-is-attempting-to-connect-to-a-gossip-network-for-cluster-another-cluster-id) -| Node configuration | [`clock synchronization error: this node is more than 500ms away from at least half of the known nodes`](#clock-synchronization-error-this-node-is-more-than-500ms-away-from-at-least-half-of-the-known-nodes) -| Node configuration | [`open file descriptor limit of is under the minimum required `](#open-file-descriptor-limit-of-number-is-under-the-minimum-required-number) -| Replication | [`replicas failing with "0 of 1 store with an attribute matching []; likely not enough nodes in cluster"`](#replicas-failing-with-0-of-1-store-with-an-attribute-matching-likely-not-enough-nodes-in-cluster) -| Split failed | [`split failed while applying backpressure; are rows updated in a tight loop?`](#split-failed-while-applying-backpressure-are-rows-updated-in-a-tight-loop) -| Deadline exceeded | [`context deadline exceeded`](#context-deadline-exceeded) -| Incremental backups | [`protected ts verification error...`](#protected-ts-verification-error) -| Ambiguous results | [`result is ambiguous`](#result-is-ambiguous) -| Import key collision | [`checking for key collisions: ingested key collides with an existing one`](#checking-for-key-collisions-ingested-key-collides-with-an-existing-one) | -| SQL memory budget exceeded | [`memory budget exceeded`](#memory-budget-exceeded) - -## connection refused - -This message indicates a client is trying to connect to a node that is either not running or is not listening on the specified interfaces (i.e., hostname or port). - -To resolve this issue, do one of the following: - -- If the node hasn't yet been started, [start the node](cockroach-start.html). -- If you specified a [`--listen-addr` and/or a `--advertise-addr` flag](cockroach-start.html#networking) when starting the node, you must include the specified IP address/hostname and port with all other [`cockroach` commands](cockroach-commands.html) or change the `COCKROACH_HOST` environment variable. - -If you're not sure what the IP address/hostname and port values might have been, you can look in the node's [logs](logging-overview.html). - -If necessary, you can also [shut down](node-shutdown.html) and then restart the node: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach start [flags] -~~~ - -## node is running secure mode, SSL connection required - -This message indicates that the cluster is using TLS encryption to protect network communication, and the client is trying to open a connection without using the required TLS certificates. - -To resolve this issue, use the [`cockroach cert create-client`](cockroach-cert.html) command to generate a client certificate and key for the user trying to connect. 
For a secure deployment tutorial, including generating security certificates and connecting clients, see [Manual Deployment](manual-deployment.html). - -## restart transaction - -Messages with the error code `40001` and the string `restart transaction` indicate that a transaction failed because it conflicted with another concurrent or recent transaction accessing the same data. The transaction needs to be retried by the client. For more information about how to implement client-side retries, see [client-side retry handling](transactions.html#client-side-intervention). - -For more information about the different types of transaction retry errors such as "retry write too old", "read within uncertainty interval", etc., see the [Transaction Retry Error Reference](transaction-retry-error-reference.html). - -## node belongs to cluster \ but is attempting to connect to a gossip network for cluster \ - -This message usually indicates that a node tried to connect to a cluster, but the node is already a member of a different cluster. This is determined by metadata in the node's data directory. To resolve this issue, do one of the following: - -- Choose a different directory to store the CockroachDB data: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start [flags] --store=[new directory] --join=[cluster host]:26257 - ~~~ - -- Remove the existing directory and start a node joining the cluster again: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ rm -r cockroach-data/ - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start [flags] --join=[cluster host]:26257 - ~~~ - -## clock synchronization error: this node is more than 500ms away from at least half of the known nodes - -This error indicates that a node has spontaneously shut down because it detected that its clock is out of sync with at least half of the other nodes in the cluster by 80% of the maximum offset allowed (500ms by default). CockroachDB requires moderate levels of [clock synchronization](recommended-production-settings.html#clock-synchronization) to preserve data consistency, so the node shutting down in this way avoids the risk of consistency anomalies. - -To prevent this from happening, you should run clock synchronization software on each node. For guidance on synchronizing clocks, see the tutorial for your deployment environment: - -Environment | Recommended Approach -------------|--------------------- -[Manual](deploy-cockroachdb-on-premises.html#step-1-synchronize-clocks) | Use NTP with Google's external NTP service. -[AWS](deploy-cockroachdb-on-aws.html#step-3-synchronize-clocks) | Use the Amazon Time Sync Service. -[Azure](deploy-cockroachdb-on-microsoft-azure.html#step-3-synchronize-clocks) | Disable Hyper-V time synchronization and use NTP with Google's external NTP service. -[Digital Ocean](deploy-cockroachdb-on-digital-ocean.html#step-2-synchronize-clocks) | Use NTP with Google's external NTP service. -[GCE](deploy-cockroachdb-on-google-cloud-platform.html#step-3-synchronize-clocks) | Use NTP with Google's internal NTP service. - -## open file descriptor limit of \ is under the minimum required \ - -CockroachDB can use a large number of open file descriptors, often more than is available by default. This message indicates that the machine on which a CockroachDB node is running is under CockroachDB's recommended limits. 
- -For more details on CockroachDB's file descriptor limits and instructions on increasing the limit on various platforms, see [File Descriptors Limit](recommended-production-settings.html#file-descriptors-limit). - -## replicas failing with "0 of 1 store with an attribute matching []; likely not enough nodes in cluster - -### When running a single-node cluster - -When running a single-node CockroachDB cluster, an error about replicas failing will eventually show up in the node's log files, for example: - -~~~ shell -E160407 09:53:50.337328 storage/queue.go:511 [replicate] 7 replicas failing with "0 of 1 store with an attribute matching []; likely not enough nodes in cluster" -~~~ - -This happens because CockroachDB expects three nodes by default. If you do not intend to add additional nodes, you can stop this error by using [`ALTER RANGE ... CONFIGURE ZONE`](configure-zone.html) to update your default zone configuration to expect only one node: - -{% include_cached copy-clipboard.html %} -~~~ shell -# Insecure cluster: -$ cockroach sql --execute="ALTER RANGE default CONFIGURE ZONE USING num_replicas=1;" --insecure -~~~ - -{% include_cached copy-clipboard.html %} -~~~ shell -# Secure cluster: -$ cockroach sql --execute="ALTER RANGE default CONFIGURE ZONE USING num_replicas=1;" --certs-dir=[path to certs directory] -~~~ - -The zone's replica count is reduced to 1. For more information, see [`ALTER RANGE ... CONFIGURE ZONE`](configure-zone.html) and [Configure Replication Zones](configure-replication-zones.html). - -### When running a multi-node cluster - -When running a multi-node CockroachDB cluster, if you see an error like the one above about replicas failing, some nodes might not be able to talk to each other. For recommended actions, see [Cluster Setup Troubleshooting](cluster-setup-troubleshooting.html#replication-issues). - -## split failed while applying backpressure; are rows updated in a tight loop? - -In CockroachDB, a table row is stored on disk as a key-value pair. Whenever the row is updated, CockroachDB also stores a distinct version of the key-value pair to enable concurrent request processing while guaranteeing consistency (see [multi-version concurrency control (MVCC)](architecture/storage-layer.html#mvcc)). All versions of a key-value pair belong to a larger ["range"](architecture/overview.html#architecture-range) of the total key space, and the historical versions remain until the garbage collection period defined by the `gc.ttlseconds` variable in the applicable [zone configuration](configure-replication-zones.html#gc-ttlseconds) has passed (25 hours by default). Once a range reaches a size threshold (512 MiB by default), CockroachDB splits the range into two ranges. However, this message indicates that a range cannot be split as intended. - -One possible cause is that the range consists only of MVCC version data due to a row being repeatedly updated, and the range cannot be split because doing so would spread MVCC versions for a single row across multiple ranges. - -To resolve this issue, make sure you are not repeatedly updating a single row. If frequent updates of a row are necessary, consider one of the following: - -- Reduce the `gc.ttlseconds` variable in the applicable [zone configuration](configure-replication-zones.html#gc-ttlseconds) to reduce the garbage collection period and prevent such a large build-up of historical values. 
-- If a row contains large columns that are not being updated with other columns, put the large columns in separate [column families](column-families.html). - -## context deadline exceeded - -This message occurs when a component of CockroachDB gives up because it was relying on another component that has not behaved as expected, for example, another node dropped a network connection. To investigate further, look in the node's logs for the primary failure that is the root cause. - -## protected ts verification error - -Messages that begin with `protected ts verification error…` indicate that your [incremental backup](take-full-and-incremental-backups.html#incremental-backups) failed because the data you are trying to backup was garbage collected. This happens when incremental backups are taken less frequently than the garbage collection periods for any of the objects in the base backup. For example, if your incremental backups recur daily, but the garbage collection period of one table in your backup is less than one day, all of your incremental backups will fail. - -The error message will specify which part of your backup is causing the failure. For example, `range span: /Table/771` indicates that table `771` is part of the problem. You can then inspect this table by running [`SELECT * FROM crdb_internal.tables WHERE id=771`](select-clause.html). You can also run [`SHOW ZONE CONFIGURATIONS`](show-zone-configurations.html) and look for any `gc.ttlseconds` values that are set lower than your incremental backup frequency. - -To resolve this issue, take a new [full backup](take-full-and-incremental-backups.html) after doing either of the following: - -- Increase the garbage collection period by [configuring the `gc.ttlseconds` replication zone variable](configure-replication-zones.html#gc-ttlseconds). For example, we recommend setting the GC TTL to a time interval **greater** than the sum of `incremental_backup_interval` + `expected_runtime_of_full_backup` + `buffer_for_slowdowns`. To estimate the expected full backup runtime, it is necessary to perform testing or verify the past performance through the [jobs table](ui-jobs-page.html#jobs-list). -- [Increase the frequency of incremental backups](manage-a-backup-schedule.html). - -## result is ambiguous - -In a distributed system, some errors can have ambiguous results. For -example, if you receive a `connection closed` error while processing a -`COMMIT` statement, you cannot tell whether the transaction -successfully committed or not. These errors are possible in any -database, but CockroachDB is somewhat more likely to produce them than -other databases because ambiguous results can be caused by failures -between the nodes of a cluster. These errors are reported with the -PostgreSQL error code `40003` (`statement_completion_unknown`) and the -message `result is ambiguous`. - -Ambiguous errors can be caused by nodes crashing, network failures, or -timeouts. If you experience a lot of these errors when things are -otherwise stable, look for performance issues. Note that ambiguity is -only possible for the last statement of a transaction (`COMMIT` or -`RELEASE SAVEPOINT`) or for statements outside a transaction. If a connection drops during a transaction that has not yet tried to commit, the transaction will definitely be aborted. - -In general, you should handle ambiguous errors the same way as -`connection closed` errors. 
If your transaction is -[idempotent](https://en.wikipedia.org/wiki/Idempotence#Computer_science_meaning), -it is safe to retry it on ambiguous errors. `UPSERT` operations are -typically idempotent, and other transactions can be written to be -idempotent by verifying the expected state before performing any -writes. Increment operations such as `UPDATE my_table SET x=x+1 WHERE -id=$1` are typical examples of operations that cannot easily be made -idempotent. If your transaction is not idempotent, then you should -decide whether to retry or not based on whether it would be better for -your application to apply the transaction twice or return an error to -the user. - -## checking for key collisions: ingested key collides with an existing one - -When importing into an existing table with [`IMPORT INTO`](import-into.html), this error occurs because the rows in the import file conflict with an existing primary key or another [`UNIQUE`](unique.html) constraint on the table. The import will fail as a result. `IMPORT INTO` is an insert-only statement, so you cannot use it to update existing rows. To update rows in an existing table, use the [`UPDATE`](update.html) statement. - -## memory budget exceeded - -This message usually indicates that `--max-sql-memory`, the memory allocated to the SQL layer, was exceeded by the operation referenced in the error. A `memory budget exceeded` error also suggests that a node is close to an [OOM crash](cluster-setup-troubleshooting.html#out-of-memory-oom-crash), which might be prevented by failing the query. - -{% include {{ page.version.version }}/prod-deployment/resolution-untuned-query.md %} - -Increasing `--max-sql-memory` can alleviate `memory budget exceeded` errors. However, allocating more `--max-sql-memory` can also increase the probability of [OOM crashes](cluster-setup-troubleshooting.html#out-of-memory-oom-crash) relative to the amount of memory currently provisioned on each node. For guidance on configuring this flag, see [Cache and SQL memory size](recommended-production-settings.html#cache-and-sql-memory-size). - -For [disk-spilling operations](vectorized-execution.html#disk-spilling-operations) such as hash joins that are memory-intensive, another solution is to increase the `sql.distsql.temp_storage.workmem` [cluster setting](cluster-settings.html) to allocate more memory to the operation before it spills to disk and likely consumes more memory. This improves the performance of the query, though at a possible reduction in the concurrency of the workload. - -For example, if a query contains a hash join that requires 128 MiB of memory before spilling to disk, values of `sql.distsql.temp_storage.workmem=64MiB` and `--max-sql-memory=1GiB` allow the query to run with a concurrency of 16 without errors. The 17th concurrent instance will exceed `--max-sql-memory` and produce a `memory budget exceeded` error. Increasing `sql.distsql.temp_storage.workmem` to `128MiB` reduces the workload concurrency to 8, but allows the queries to finish without spilling to disk. For more information, see [Disk-spilling operations](vectorized-execution.html#disk-spilling-operations). - -{{site.data.alerts.callout_info}} -{% include {{ page.version.version }}/prod-deployment/resolution-oom-crash.md %} -{{site.data.alerts.end}} - -## Something else? 
- -Try searching the rest of our docs for answers or using our other [support resources](support-resources.html), including: - -- [CockroachDB Community Forum](https://forum.cockroachlabs.com) -- [CockroachDB Community Slack](https://cockroachdb.slack.com) -- [StackOverflow](http://stackoverflow.com/questions/tagged/cockroachdb) -- [CockroachDB Support Portal](https://support.cockroachlabs.com) -- [Transaction retry error reference](transaction-retry-error-reference.html) diff --git a/src/current/v22.1/common-issues-to-monitor.md b/src/current/v22.1/common-issues-to-monitor.md deleted file mode 100644 index 681450e1f18..00000000000 --- a/src/current/v22.1/common-issues-to-monitor.md +++ /dev/null @@ -1,294 +0,0 @@ ---- -title: Common Issues to Monitor -summary: How to configure and monitor your CockroachDB cluster to prevent commonly encountered issues. -toc: true -docs_area: manage ---- - -This page summarizes how to configure and monitor your cluster to prevent issues commonly encountered with: - -- [CPU](#cpu) -- [Memory](#memory) -- [Storage and disk I/O](#storage-and-disk-i-o) - -## CPU - -{% include {{ page.version.version }}/prod-deployment/terminology-vcpu.md %} - -Issues with CPU most commonly arise when there is insufficient CPU to support the scale of the workload. - -### CPU planning - -Provision enough CPU to support your operational and workload concurrency requirements: - -{% capture cpu_recommendation_minimum %}For cluster stability, Cockroach Labs recommends a _minimum_ of {% include {{ page.version.version }}/prod-deployment/provision-cpu.md threshold='minimum' %}, and strongly recommends no fewer than {% include {{ page.version.version }}/prod-deployment/provision-cpu.md threshold='absolute_minimum' %} per node. In a cluster with too few CPU resources, foreground client workloads will compete with the cluster's background maintenance tasks.{% endcapture %} - -{% capture cpu_recommendation_maximum %}Cockroach Labs does not extensively test clusters with more than {% include {{ page.version.version }}/prod-deployment/provision-cpu.md threshold='maximum' %} per node. This is the recommended _maximum_ threshold.{% endcapture %} - -| Category | Recommendations | -|----------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| CPU |
  • {{ cpu_recommendation_minimum | strip_newlines }}
  • {{ cpu_recommendation_maximum | strip_newlines }}
  • Use larger VMs to handle temporary workload spikes and processing hot spots.
  • Use connection pooling to manage workload concurrency. {% include {{ page.version.version }}/prod-deployment/prod-guidance-connection-pooling.md %} For more details, refer to [Sizing connection pools](connection-pooling.html#sizing-connection-pools).
  • For additional CPU recommendations, refer to [Recommended Production Settings](recommended-production-settings.html#sizing).
| - -### CPU monitoring - -Monitor possible signs of CPU starvation: - -| Parameter | Description | -|-----------------------------------------------|--------------------------------------------------------------------------------------| -| [Service latency](#service-latency) | The time between when the cluster receives a query and finishes executing the query. | -| [CPU usage](#cpu-usage) | The CPU consumption by the CockroachDB node process. | -| [Workload concurrency](#workload-concurrency) | The number of SQL statements being executed on the cluster at the same time. | -| [LSM health](#lsm-health) | The health of the persistent stores. | -| [Node health](#node-health) | The operational status of the nodes. | - -#### Service latency - -Degradation in SQL response time is the most common symptom of CPU starvation. It can also be a symptom of [insufficient disk I/O](#storage-and-disk-i-o). - -- The [**Service Latency: SQL Statements, 99th percentile**](ui-sql-dashboard.html#service-latency-sql-99th-percentile) and [**Service Latency: SQL Statements, 90th percentile**](ui-sql-dashboard.html#service-latency-sql-90th-percentile) graphs on the SQL dashboard show the time in nanoseconds between when the cluster [receives a query and finishes executing the query](architecture/sql-layer.html). This time does not include returning results to the client. - -If latencies are consistently high, check for: - -- High [CPU usage](#cpu-usage). -- An [I/O bottleneck](#disk-iops). - -#### CPU usage - -Compaction on the [storage layer](architecture/storage-layer.html) uses CPU to run concurrent worker threads. - -- The [**CPU Percent**](ui-overload-dashboard.html#cpu-percent) graph on the Hardware and Overload dashboards shows the CPU consumption by the CockroachDB process, and excludes other processes on the node. - - {% include {{ page.version.version }}/prod-deployment/healthy-cpu-percent.md %} - -If CPU usage is high, check whether [workload concurrency](#workload-concurrency) is exceeding CPU resources. - -#### Workload concurrency - -The number of concurrent active SQL statements should be proportionate to your provisioned CPU. - -- The [**SQL Statements**](ui-sql-dashboard.html#sql-statements) graph on the Overview and SQL dashboards shows the 10-second average of `SELECT`, `UPDATE`, `INSERT`, and `DELETE` statements being executed per second on the cluster or node. The latest QPS value for the cluster is also displayed with the **Queries per second** counter on the Metrics page. - - {% include {{ page.version.version }}/prod-deployment/healthy-workload-concurrency.md %} - -If workload concurrency exceeds CPU resources, you will observe: - -- High [CPU usage](#cpu-usage). -- Degradation in [SQL response time](#service-latency). -- Over time, an [unhealthy LSM](#lsm-health) and [cluster instability](#node-health). - -{{site.data.alerts.callout_success}} -{% include {{ page.version.version }}/prod-deployment/resolution-excessive-concurrency.md %} -{{site.data.alerts.end}} - -#### LSM health - -Issues at the storage layer, including an [inverted LSM](architecture/storage-layer.html#inverted-lsms) and high [read amplification](architecture/storage-layer.html#read-amplification), can be observed when compaction falls behind due to insufficient CPU or excessively high [recovery and rebalance rates](cluster-setup-troubleshooting.html#excessive-snapshot-rebalance-and-recovery-rates). 
- -- The [**LSM L0 Health**](ui-overload-dashboard.html#lsm-l0-health) graph on the Overload dashboard shows the health of the [persistent stores](architecture/storage-layer.html), which are implemented as log-structured merge (LSM) trees. Level 0 is the highest level of the LSM tree and consists of files containing the latest data written to the [Pebble storage engine](cockroach-start.html#storage-engine). For more information about LSM levels and how LSMs work, see [Log-structured Merge-trees](architecture/storage-layer.html#log-structured-merge-trees). - - {% include {{ page.version.version }}/prod-deployment/healthy-lsm.md %} - - {{site.data.alerts.callout_info}} - An unhealthy LSM can be caused by other factors, including [under-provisioned storage](#storage-and-disk-i-o). To correlate this symptom with CPU starvation, check for high [CPU usage](#cpu-usage) and excessive [workload concurrency](#workload-concurrency). - {{site.data.alerts.end}} - -- The **Read Amplification** graph on the [Storage Dashboard](ui-storage-dashboard.html) shows the average number of disk reads per logical SQL statement, also known as the [read amplification](architecture/storage-layer.html#read-amplification) factor. - - {% include {{ page.version.version }}/prod-deployment/healthy-read-amplification.md %} - -- The `STORAGE` [logging channel](logging-overview.html#logging-channels) indicates an unhealthy LSM with the following: - - - Frequent `compaction` status messages. - - - High-read-amplification warnings, e.g., `sstables (read amplification = 54)`. - -{{site.data.alerts.callout_success}} -{% include {{ page.version.version }}/prod-deployment/resolution-inverted-lsm.md %} -{{site.data.alerts.end}} - -#### Node health - -If [issues at the storage layer](#lsm-health) remain unresolved, affected nodes will miss their liveness heartbeats, causing the cluster to lose nodes and eventually become unresponsive. - -- The [**Node status**](ui-cluster-overview-page.html#node-status) on the Cluster Overview page indicates whether nodes are online (`LIVE`) or have crashed (`SUSPECT` or `DEAD`). - -- The `/health` endpoint of the [Cluster API](cluster-api.html) returns a `500` error when a node is unhealthy. - -- A [Prometheus alert](monitoring-and-alerting.html#node-is-down) can notify when a node has been down for 15 minutes or more. - -If nodes have shut down, this can also be caused by [insufficient storage capacity](#storage-capacity). - -{% include {{ page.version.version }}/prod-deployment/cluster-unavailable-monitoring.md %} - -## Memory - -CockroachDB is [resilient](demo-fault-tolerance-and-recovery.html) to node crashes. However, frequent node restarts caused by [out-of-memory (OOM) crashes](cluster-setup-troubleshooting.html#out-of-memory-oom-crash) can impact cluster stability and performance. 
- -### Memory planning - -Provision enough memory and allocate an appropriate portion for data caching: - -| Category | Recommendations | -|----------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| Memory |
  • Provision at least {% include {{ page.version.version }}/prod-deployment/provision-memory.md %}.
  • {% include {{ page.version.version }}/prod-deployment/prod-guidance-cache-max-sql-memory.md %} For more details, see the [Production Checklist](recommended-production-settings.html#cache-and-sql-memory-size).
  • {% include {{ page.version.version }}/prod-deployment/prod-guidance-disable-swap.md %}
  • See additional memory recommendations in the [Production Checklist](recommended-production-settings.html#memory).
  • | - -### Memory monitoring - -Monitor memory usage and node behavior for [OOM errors](cluster-setup-troubleshooting.html#out-of-memory-oom-crash): - -| Metric or event | Description | -|-------------------------------------------------|---------------------------------------------| -| [Node process restarts](#node-process-restarts) | Nodes restarting after crashes. | -| [SQL memory usage](#sql-memory-usage) | The memory allocated to the SQL layer. | -| [Database memory usage](#database-memory-usage) | The memory in use by CockroachDB processes. | - -#### Node process restarts - -CockroachDB attempts to restart nodes after they crash. Nodes that frequently restart following an abrupt process exit may point to an underlying memory issue. - -- The [**Node status**](ui-cluster-overview-page.html#node-status) on the Cluster Overview page indicates whether nodes are online (`LIVE`) or have crashed (`SUSPECT` or `DEAD`). - -- When deploying on [Kubernetes](kubernetes-overview.html), the `kubectl get pods` output contains a `RESTARTS` column that tracks the number of restarts for each CockroachDB pod. - -- The `OPS` [logging channel](logging-overview.html#logging-channels) will record a [`node_restart` event](eventlog.html#node_restart) whenever a node rejoins the cluster after being offline. - -- A [Prometheus alert](monitoring-and-alerting.html#node-is-restarting-too-frequently) can notify when a node has restarted more than once in the last 10 minutes. - -##### Verify OOM errors - -If you observe nodes frequently restarting, confirm that the crashes are caused by [OOM errors](cluster-setup-troubleshooting.html#out-of-memory-oom-crash): - -- Monitor `dmesg` to determine if a node crashed because it ran out of memory: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ sudo dmesg | grep -iC 3 "cockroach" - ~~~ - - The following output indicates the node crashed due to insufficient memory: - - ~~~ shell - $ host kernel: Out of Memory: Killed process (cockroach). - ~~~ - -- When deploying on [Kubernetes](kubernetes-overview.html), run `kubectl logs {pod-name}` and look for OOM errors in the log messages. - -{{site.data.alerts.callout_success}} -{% include {{ page.version.version }}/prod-deployment/resolution-oom-crash.md %} -{{site.data.alerts.end}} - -If you confirm that nodes are crashing due to OOM errors, also check whether [SQL queries](#sql-memory-usage) may be responsible. - -#### SQL memory usage - -An untuned SQL query can consume significant resources and impact the performance of other workloads. - -- The [**SQL Memory**](ui-sql-dashboard.html#sql-memory) graph on the SQL dashboard shows the current amount of memory in KiB allocated to the SQL layer. - - {% include {{ page.version.version }}/prod-deployment/healthy-sql-memory.md %} - -- The "active query dump", enabled by default with the `diagnostics.active_query_dumps.enabled` [cluster setting](cluster-settings.html), is a record of anonymized active queries that is written to disk when a node is detected to be under memory pressure. - - You can use the active query dump to correlate specific queries to OOM crashes. Active query dumps have the filename `activequeryprof.{date-and-time}.csv` and are found in the `heap_profiler` directory in the configured [logging directory](configure-logs.html#logging-directory). They are also included when running [`cockroach debug zip`](cockroach-debug-zip.html). 
- -- A `SHOW STATEMENTS` statement can [identify long-running queries](manage-long-running-queries.html#identify-long-running-queries) on the cluster that may be consuming excessive memory. - -- A [`memory budget exceeded`](common-errors.html#memory-budget-exceeded) error in the logs indicates that `--max-sql-memory`, the memory allocated to the SQL layer, was exceeded by the operation referenced in the error. For guidance on resolving this issue, see [Common Errors](common-errors.html#memory-budget-exceeded). - -{{site.data.alerts.callout_success}} -{% include {{ page.version.version }}/prod-deployment/resolution-untuned-query.md %} -{{site.data.alerts.end}} - -{{site.data.alerts.callout_danger}} -{% include {{page.version.version}}/sql/add-size-limits-to-indexed-columns.md %} -{{site.data.alerts.end}} - -#### Database memory usage - -CockroachDB memory usage includes both accounted memory, such as the amount allocated to `--cache` and `--max-sql-memory`; and unaccounted memory, such as uncollected Go garbage and process overhead. - -- The [**Memory Usage**](ui-runtime-dashboard.html#memory-usage) graph on the Runtime dashboard shows the total memory in use by CockroachDB processes. The RSS (resident set size) metric represents actual CockroachDB memory usage from the OS/Linux/pod point of view. The Go and CGo metrics represent memory allocation and total usage from a CockroachDB point of view. - - {% include {{ page.version.version }}/prod-deployment/healthy-crdb-memory.md %} - -For more context on acceptable memory usage, see [Suspected memory leak](cluster-setup-troubleshooting.html#suspected-memory-leak). - -## Storage and disk I/O - -The cluster will underperform if storage is not provisioned or configured correctly. This can lead to further issues such as [disk stalls](cluster-setup-troubleshooting.html#disk-stalls) and node shutdown. - -### Storage and disk planning - -Provision enough storage capacity for CockroachDB data, and configure your volumes to maximize disk I/O: - -| Category | Recommendations | -|----------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| Storage |
    • Provision volumes with {% include {{ page.version.version }}/prod-deployment/provision-storage.md %}.
    • {% include {{ page.version.version }}/prod-deployment/prod-guidance-store-volume.md %}
    • {% include {{ page.version.version }}/prod-deployment/prod-guidance-log-volume.md %}
    • See additional storage recommendations in the [Production Checklist](recommended-production-settings.html#storage).
    | -| Disk I/O |
    • Disks must be able to achieve {% include {{ page.version.version }}/prod-deployment/provision-disk-io.md %}.
    • {% include {{ page.version.version }}/prod-deployment/prod-guidance-lvm.md %}
    • See additional disk I/O recommendations in the [Production Checklist](recommended-production-settings.html#disk-i-o).
    | | - -### Storage and disk monitoring - -Monitor storage capacity and disk performance: - -| Metric or event | Description | -|---------------------------------------------------|----------------------------------------------------------------------------------------------------------------------| -| [Storage capacity](#storage-capacity) | The available and used disk capacity in the CockroachDB [store](cockroach-start.html#store). | -| [Disk IOPS](#disk-iops) | The I/O requests per second. | -| [Node heartbeat latency](#node-heartbeat-latency) | The time between [node liveness](cluster-setup-troubleshooting.html#node-liveness-issues) heartbeats. | -| [Command commit latency](#command-commit-latency) | The speed at which [Raft commands](architecture/replication-layer.html) are being committed by nodes in the cluster. | - -#### Storage capacity - -CockroachDB requires disk space in order to accept writes and report node liveness. When a node runs out of disk space, it [shuts down](#node-health) and cannot be restarted until space is freed up. - -- The [**Capacity**](ui-storage-dashboard.html#capacity) graph on the Overview and Storage dashboards shows the available and used disk capacity in the CockroachDB [store](cockroach-start.html#store). - - {% include {{ page.version.version }}/prod-deployment/healthy-storage-capacity.md %} - -- A [Prometheus alert](monitoring-and-alerting.html#node-is-running-low-on-disk-space) can notify when a node has less than 15% of free space remaining. - -{{site.data.alerts.callout_success}} -Ensure that you [provision sufficient storage](recommended-production-settings.html#storage). If storage is correctly provisioned and is running low, CockroachDB automatically creates an emergency ballast file that can free up space. For details, see [Disks filling up](cluster-setup-troubleshooting.html#disks-filling-up). -{{site.data.alerts.end}} - -#### Disk IOPS - -Insufficient disk I/O can cause [poor SQL performance](#service-latency) and potentially [disk stalls](cluster-setup-troubleshooting.html#disk-stalls). - -- The [**Disk Ops In Progress**](ui-hardware-dashboard.html#disk-ops-in-progress) graph on the Hardware dashboard shows the number of disk reads and writes in queue. - - {% include {{ page.version.version }}/prod-deployment/healthy-disk-ops-in-progress.md %} - -- The Linux tool `iostat` (part of `sysstat`) can be used to monitor IOPS. In the device status output, `avgqu-sz` corresponds to the **Disk Ops In Progress** metric. If service times persist in double digits on any node, this means that your storage device is saturated and is likely under-provisioned or misconfigured. - -With insufficient disk I/O, you may also see: - -- Degradation in [SQL response time](#service-latency). -- An [unhealthy LSM](#lsm-health). - -#### Node heartbeat latency - -Because each node needs to update a liveness record on disk, maxing out disk bandwidth can cause liveness heartbeats to be missed. - -- The [**Node Heartbeat Latency: 99th percentile**](ui-distributed-dashboard.html#node-heartbeat-latency-99th-percentile) and [**Node Heartbeat Latency: 90th percentile**](ui-distributed-dashboard.html#node-heartbeat-latency-90th-percentile) graphs on the [Distributed Dashboard](ui-distributed-dashboard.html) show the time elapsed between [node liveness](cluster-setup-troubleshooting.html#node-liveness-issues) heartbeats. 
- - {% include {{ page.version.version }}/prod-deployment/healthy-node-heartbeat-latency.md %} - -#### Command commit latency - -- The **Command Commit Latency: 50th percentile** and **Command Commit Latency: 99th percentile** graphs on the [Storage dashboard](ui-storage-dashboard.html) show how quickly [Raft commands](architecture/replication-layer.html) are being committed by nodes in the cluster. This is a good signal of I/O load. - - {% include {{ page.version.version }}/prod-deployment/healthy-command-commit-latency.md %} - -## See also - -- [Production Checklist](recommended-production-settings.html) -- [Monitoring and Alerting](monitoring-and-alerting.html) -- [Common Errors and Solutions](common-errors.html) -- [Operational FAQs](operational-faqs.html) -- [Performance Tuning Recipes](performance-recipes.html) -- [Troubleshoot Cluster Setup](cluster-setup-troubleshooting.html) -- [Troubleshoot SQL Behavior](query-behavior-troubleshooting.html) -- [Admission Control](admission-control.html) -- [Metrics](metrics.html) -- [Alerts Page](../cockroachcloud/alerts-page.html) (CockroachDB {{ site.data.products.dedicated }}) diff --git a/src/current/v22.1/common-table-expressions.md b/src/current/v22.1/common-table-expressions.md deleted file mode 100644 index e10a97524ee..00000000000 --- a/src/current/v22.1/common-table-expressions.md +++ /dev/null @@ -1,326 +0,0 @@ ---- -title: Common Table Expressions (WITH Queries) -summary: Common table expressions (CTEs) simplify the definition and use of subqueries -toc: true -docs_area: reference.sql ---- - -A _common table expression_ (CTE), also called a `WITH` query, provides a shorthand name to a possibly complex [subquery](subqueries.html) before it is used in a larger query context. This improves the readability of SQL code. - -You can use CTEs in combination with [`SELECT` clauses](select-clause.html) and [`INSERT`](insert.html), [`DELETE`](delete.html), [`UPDATE`](update.html), and [`UPSERT`](upsert.html) data-modifying statements. - -## Synopsis - -
    -{% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/with_clause.html %} -
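-
-For example, the following minimal statement (an illustrative sketch that does not depend on any dataset) exercises each part of the grammar above: it names the common table expression `w`, names its single column `n`, uses a simple `SELECT` as the `preparable_stmt`, and overrides the optimizer's materialization decision with `NOT MATERIALIZED`:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> WITH w (n) AS NOT MATERIALIZED (
-    SELECT 1
-  )
-  SELECT n + 1 FROM w;
-~~~
-
-Each of these elements corresponds to a parameter described below.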
    - -## Parameters - -Parameter | Description -----------|------------ -`table_alias_name` | The name to use to refer to the common table expression from the accompanying query or statement. -`name` | A name for one of the columns in the newly defined common table expression. -`preparable_stmt` | The statement or subquery to use as common table expression. -`MATERIALIZED`/`NOT MATERIALIZED` | Override the [optimizer](cost-based-optimizer.html)'s decision to materialize (i.e., store the results) of the common table expression. By default, the optimizer materializes the common table expression if it affects other objects in the database, or if it is used in the query multiple times. - -## Overview - -{{site.data.alerts.callout_info}} -The examples on this page use MovR, a fictional vehicle-sharing application, to demonstrate CockroachDB SQL statements. To follow along, run [`cockroach demo`](cockroach-demo.html) from the command line to start a temporary, in-memory cluster with the `movr` dataset preloaded. - -For more information about the MovR example application and dataset, see [MovR: A Global Vehicle-sharing App](movr.html). -{{site.data.alerts.end}} - -A query or statement of the form `WITH x AS (y) z` creates the -temporary table name `x` for the results of the subquery `y`, to be -reused in the context of `z`. - -For example: - -{% include_cached copy-clipboard.html %} -~~~ sql -> WITH r AS (SELECT * FROM rides WHERE revenue > 98) - SELECT * FROM users AS u, r WHERE r.rider_id = u.id; -~~~ - -~~~ - id | city | name | address | credit_card | id | city | vehicle_city | rider_id | vehicle_id | start_address | end_address | start_time | end_time | revenue ----------------------------------------+---------------+------------------+--------------------------------+-------------+--------------------------------------+---------------+---------------+--------------------------------------+--------------------------------------+-----------------------------------+---------------------------+---------------------------+---------------------------+---------- - ae147ae1-47ae-4800-8000-000000000022 | amsterdam | Tyler Dalton | 88194 Angela Gardens Suite 94 | 4443538758 | bbe76c8b-4395-4000-8000-00000000016f | amsterdam | amsterdam | ae147ae1-47ae-4800-8000-000000000022 | aaaaaaaa-aaaa-4800-8000-00000000000a | 45295 Brewer View Suite 52 | 62188 Jade Causeway | 2018-12-17 03:04:05+00:00 | 2018-12-17 13:04:05+00:00 | 99.00 - c7ae147a-e147-4000-8000-000000000027 | paris | Tina Miller | 97521 Mark Extensions | 8880478663 | d5810624-dd2f-4800-8000-0000000001a1 | paris | paris | c7ae147a-e147-4000-8000-000000000027 | cccccccc-cccc-4000-8000-00000000000c | 47713 Reynolds Mountains Suite 39 | 1417 Stephanie Villages | 2018-12-17 03:04:05+00:00 | 2018-12-18 22:04:05+00:00 | 99.00 - 75c28f5c-28f5-4400-8000-000000000017 | san francisco | William Wood | 36021 Steven Cove Apt. 89 | 5669281259 | 8ac08312-6e97-4000-8000-00000000010f | san francisco | san francisco | 75c28f5c-28f5-4400-8000-000000000017 | 77777777-7777-4800-8000-000000000007 | 84407 Tony Crest | 55336 Jon Manors | 2018-12-10 03:04:05+00:00 | 2018-12-11 13:04:05+00:00 | 99.00 - 8a3d70a3-d70a-4000-8000-00000000001b | san francisco | Jessica Martinez | 96676 Jennifer Knolls Suite 91 | 1601930189 | 7d70a3d7-0a3d-4000-8000-0000000000f5 | san francisco | san francisco | 8a3d70a3-d70a-4000-8000-00000000001b | 77777777-7777-4800-8000-000000000007 | 78978 Stevens Ramp Suite 8 | 7340 Alison Field Apt. 
44 | 2018-12-19 03:04:05+00:00 | 2018-12-21 10:04:05+00:00 | 99.00 - 47ae147a-e147-4000-8000-00000000000e | washington dc | Patricia Herrera | 80588 Perez Camp | 6812041796 | 4083126e-978d-4000-8000-00000000007e | washington dc | washington dc | 47ae147a-e147-4000-8000-00000000000e | 44444444-4444-4400-8000-000000000004 | 33055 Julie Dale Suite 93 | 17280 Jill Drives | 2019-01-01 03:04:05+00:00 | 2019-01-01 14:04:05+00:00 | 99.00 -(5 rows) -~~~ - -In this example, the `WITH` clause defines the temporary name `r` for -the subquery over `rides`, and that name becomes a table name -for use in any [table expression](table-expressions.html) of the -subsequent `SELECT` clause. - -This query is equivalent to, but simpler to read than: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM users AS u, (SELECT * FROM rides WHERE revenue > 98) AS r - WHERE r.rider_id = u.id; -~~~ - -It is also possible to define multiple common table expressions -simultaneously with a single `WITH` clause, separated by commas. Later -subqueries can refer to earlier subqueries by name. For example, the -following query is equivalent to the two preceding examples: - -{% include_cached copy-clipboard.html %} -~~~ sql -> WITH r AS (SELECT * FROM rides WHERE revenue > 98), - results AS (SELECT * FROM users AS u, r WHERE r.rider_id = u.id) - SELECT * FROM results; -~~~ - -In this example, the second CTE `results` refers to the first CTE `r` -by name. The final query refers to the CTE `results`. - -## Nested `WITH` clauses - -You can use a `WITH` clause in a subquery and a `WITH` clause within another `WITH` clause. For example: - -{% include_cached copy-clipboard.html %} -~~~ sql -> WITH u AS - (SELECT * FROM - (WITH u_tab AS (SELECT * FROM users) SELECT * FROM u_tab)) - SELECT * FROM u; -~~~ - -When analyzing [table expressions](table-expressions.html) that -mention a CTE name, CockroachDB will choose the CTE definition that is -closest to the table expression. For example: - -{% include_cached copy-clipboard.html %} -~~~ sql -> WITH - u AS (SELECT * FROM users), - v AS (WITH u AS (SELECT * from vehicles) SELECT * FROM u) - SELECT * FROM v; -~~~ - -In this example, the inner subquery `SELECT * FROM v` will select from -table `vehicles` (closest `WITH` clause), not from table `users`. - -{{site.data.alerts.callout_info}} - CockroachDB does not support nested `WITH` clauses containing [data-modifying statements](#data-modifying-statements). `WITH` clauses containing data-modifying statements must be at the top level of the query. -{{site.data.alerts.end}} - -## Data-modifying statements - -You can use a [data-modifying statement](sql-statements.html#data-manipulation-statements) (`INSERT`, `DELETE`, -etc.) as a common table expression, as long as the `WITH` clause containing the data-modifying statement is at the top level of the query. 
-
-For example:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> WITH final_code AS
-  (INSERT INTO promo_codes(code, description, rules)
-  VALUES ('half_off', 'Half-price ride!', '{"type": "percent_discount", "value": "50%"}'), ('free_ride', 'Free ride!', '{"type": "percent_discount", "value": "100%"}')
-  returning rules)
-  SELECT rules FROM final_code;
-~~~
-
-~~~
-  rules
-+-----------------------------------------------+
-  {"type": "percent_discount", "value": "50%"}
-  {"type": "percent_discount", "value": "100%"}
-(2 rows)
-~~~
-
-If the `WITH` clause containing the data-modifying statement is at a lower level, the statement results in an error:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> SELECT (WITH final_code AS
-  (INSERT INTO promo_codes(code, description, rules)
-  VALUES ('half_off', 'Half-price ride!', '{"type": "percent_discount", "value": "50%"}'), ('free_ride', 'Free ride!', '{"type": "percent_discount", "value": "100%"}')
-  returning rules)
-  SELECT rules FROM final_code);
-~~~
-
-~~~
-ERROR: WITH clause containing a data-modifying statement must be at the top level
-SQLSTATE: 0A000
-~~~
-
-{{site.data.alerts.callout_info}}
-If a common table expression contains
-a data-modifying statement (`INSERT`, `DELETE`,
-etc.), the modifications are performed fully even if only part
-of the results are used, e.g., with `LIMIT`.
-See [Data writes in subqueries](subqueries.html#data-writes-in-subqueries) for details.
-{{site.data.alerts.end}}
-
-## Reference multiple common table expressions
-
-You can reference multiple CTEs in a single query using a `WITH` operator.
-
-For example:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> WITH
-  users_ny AS (SELECT name, id FROM users WHERE city='new york'),
-  vehicles_ny AS (SELECT type, id, owner_id FROM vehicles WHERE city='new york')
-  SELECT * FROM users_ny JOIN vehicles_ny ON users_ny.id = vehicles_ny.owner_id;
-~~~
-
-~~~
- name | id | type | id | owner_id
-+------------------+--------------------------------------+------------+--------------------------------------+--------------------------------------+
- James Hamilton | 051eb851-eb85-4ec0-8000-000000000001 | skateboard | 00000000-0000-4000-8000-000000000000 | 051eb851-eb85-4ec0-8000-000000000001
- Catherine Nelson | 147ae147-ae14-4b00-8000-000000000004 | scooter | 11111111-1111-4100-8000-000000000001 | 147ae147-ae14-4b00-8000-000000000004
-(2 rows)
-~~~
-
-In this single query, you define two CTEs and then reference them in a table join.
-
-## Recursive common table expressions
-
-[Recursive common table expressions](https://en.wikipedia.org/wiki/Hierarchical_and_recursive_queries_in_SQL#Common_table_expression) are common table expressions that contain subqueries that refer to their own output.
-
-Recursive CTE definitions take the following form:
-
-~~~
-WITH RECURSIVE <cte_name> (<columns>) AS (
-    <initial subquery>
-  [UNION | UNION ALL]
-    <recursive subquery>
-)
-<parent query>
-~~~
-
-To write a recursive CTE:
-
-1. Add the `RECURSIVE` keyword directly after the `WITH` operator in the CTE definition, and before the CTE name.
-1. Define an initial, non-recursive subquery. This subquery defines the initial values of the CTE.
-1. Add the `UNION` or `UNION ALL` keyword after the initial subquery. The `UNION` variant deduplicates rows.
-1. Define a recursive subquery that references its own output. This subquery can also reference the CTE name, unlike the initial subquery.
-1. Write a parent query that evaluates the results of the CTE.
-
-CockroachDB evaluates recursive CTEs as follows:
-
-1. The initial query is evaluated.
Its results are stored to rows in the CTE and copied to a temporary, working table. This working table is updated across iterations of the recursive subquery. -1. The recursive subquery is evaluated iteratively on the contents of the working table. The results of each iteration replace the contents of the working table. The results are also stored to rows of the CTE. The recursive subquery iterates until no results are returned. - -{{site.data.alerts.callout_info}} -Recursive subqueries must eventually return no results, or the query will run indefinitely. -{{site.data.alerts.end}} - -### Example - -The following recursive CTE calculates the factorial of the numbers 0 through 9: - -{% include_cached copy-clipboard.html %} -~~~ sql -WITH RECURSIVE cte (n, factorial) AS ( - VALUES (0, 1) -- initial subquery - UNION ALL - SELECT n+1, (n+1)*factorial FROM cte WHERE n < 9 -- recursive subquery -) -SELECT * FROM cte; -~~~ - -~~~ - n | factorial -+---+-----------+ - 0 | 1 - 1 | 1 - 2 | 2 - 3 | 6 - 4 | 24 - 5 | 120 - 6 | 720 - 7 | 5040 - 8 | 40320 - 9 | 362880 -(10 rows) -~~~ - -The initial subquery (`VALUES (0, 1)`) initializes the working table with the values `0` for the `n` column and `1` for the `factorial` column. The recursive subquery (`SELECT n+1, (n+1)*factorial FROM cte WHERE n < 9`) evaluates over the initial values of the working table and replaces its contents with the results. It then iterates over the contents of the working table, replacing its contents at each iteration, until `n` reaches `9`, when the [`WHERE` clause](select-clause.html#filter-rows) evaluates as false. - -If no `WHERE` clause were defined in the example, the recursive subquery would always return results and loop indefinitely, resulting in an error: - -{% include_cached copy-clipboard.html %} -~~~ sql -WITH RECURSIVE cte (n, factorial) AS ( - VALUES (0, 1) -- initial subquery - UNION ALL - SELECT n+1, (n+1)*factorial FROM cte -- recursive subquery with no WHERE clause -) -SELECT * FROM cte; -~~~ - -~~~ -ERROR: integer out of range -SQLSTATE: 22003 -~~~ - -If you are unsure if your recursive subquery will loop indefinitely, you can limit the results of the CTE with the [`LIMIT`](limit-offset.html) keyword. For example, if you remove the `WHERE` clause from the factorial example, you can use `LIMIT` to limit the results and avoid the `integer out of range` error: - -{% include_cached copy-clipboard.html %} -~~~ sql -WITH RECURSIVE cte (n, factorial) AS ( - VALUES (0, 1) -- initial subquery - UNION ALL - SELECT n+1, (n+1)*factorial FROM cte -- recursive subquery -) -SELECT * FROM cte LIMIT 10; -~~~ - -~~~ - n | factorial -+---+-----------+ - 0 | 1 - 1 | 1 - 2 | 2 - 3 | 6 - 4 | 24 - 5 | 120 - 6 | 720 - 7 | 5040 - 8 | 40320 - 9 | 362880 -(10 rows) -~~~ - -While this practice works for testing and debugging, Cockroach Labs does not recommend it in production. - -## Correlated common table expressions - -If a common table expression is contained in a subquery, the CTE can reference columns defined outside of the subquery. This is called a _correlated common table expression_. For example, in the following query, the expression `(SELECT 1 + x)` references `x` in the outer scope. 
-
-{% include_cached copy-clipboard.html %}
-~~~sql
-SELECT
- *
- FROM (VALUES (1), (2)) AS v(x),
- LATERAL (SELECT * FROM (WITH foo(incrementedx) AS (SELECT 1 + x) SELECT * FROM foo))
-~~~
-~~~
- x | incrementedx
-----+---------------
- 1 | 2
- 2 | 3
-(2 rows)
-~~~
-
-CTEs containing statements (`INSERT`, `UPSERT`, `UPDATE`, `DELETE`) that modify data can appear only at the upper level, so they **cannot** be correlated.
-
-## See also
-
-- [Subqueries](subqueries.html)
-- [Selection Queries](selection-queries.html)
-- [Table Expressions](table-expressions.html)
-- [`EXPLAIN`](explain.html)
diff --git a/src/current/v22.1/community-tooling.md b/src/current/v22.1/community-tooling.md
deleted file mode 100644
index 8f92b8bfb03..00000000000
--- a/src/current/v22.1/community-tooling.md
+++ /dev/null
@@ -1,86 +0,0 @@
----
-title: Third-Party Tools Supported by the Community
-summary: Learn about third-party software that works with CockroachDB.
-toc: true
-docs_area: reference.third_party_support
----
-
-The following tools have been tested or developed by the CockroachDB community, but are not officially supported by Cockroach Labs.
-
-If you encounter problems with using these tools, please contact the maintainer of the tool with details.
-
-{{site.data.alerts.callout_success}}
-If you have tested or developed a third-party tool with CockroachDB, and would like it listed on this page, please [open a pull request to our docs GitHub repository](https://github.com/cockroachdb/docs/edit/master/v21.2/community-tooling.md).
-{{site.data.alerts.end}}
-
-## Drivers and data access frameworks
-
-### C++
-
-- [libpqxx](https://github.com/cockroachdb/community-tooling-samples/tree/main/cxx)
-
-### Elixir
-
-- [Postgrex](https://hexdocs.pm/postgrex/Postgrex.html)
-  - [Example of connecting to CockroachDB {{ site.data.products.serverless }} using Postgrex](https://github.com/devalexandre/elixir-cockroach)
-
-### Go
-
-- [sqlx](http://jmoiron.github.io/sqlx/)
-
-### Java
-
-- [JDBI](https://jdbi.org/)
-- [clojure.java.jdbc](https://github.com/cockroachdb/community-tooling-samples/tree/main/clojure)
-
-### PHP
-
-- [php-pgsql](https://github.com/cockroachdb/community-tooling-samples/tree/main/php)
-
-### PowerShell
-
-- [Npgsql](https://blog.ervits.com/2020/03/exploring-cockroachdb-with-jupyter.html)
-
-### R
-
-- [RPostgres](https://blog.ervits.com/2020/02/exploring-cockroachdb-with-r-and.html)
-
-### Rust
-
-- [tokio_postgres](https://docs.rs/tokio-postgres/latest/tokio_postgres)
-
-### Other
-
-- [Apache Hop (Incubating)](https://hop.apache.org)
-
-## Visualization tools
-
-- [Beekeeper Studio](https://www.beekeeperstudio.io/db/cockroachdb-client/)
-- [DbVisualizer](https://www.cdata.com/kb/tech/cockroachdb-jdbc-dbv.rst)
-- [Navicat for PostgreSQL](https://www.navicat.com/en/products/navicat-for-postgresql)/[Navicat Premium](https://www.navicat.com/en/products/navicat-premium)
-- [Pgweb](http://sosedoff.github.io/pgweb/)
-- [Postico](https://eggerapps.at/postico/)
-- [TablePlus](https://tableplus.com/blog/2018/06/best-cockroachdb-gui-client-tableplus.html)
-
-## Schema migration tools
-
-- [SchemaHero](https://schemahero.io/databases/cockroachdb/connecting/)
-- [DbUp](https://github.com/DbUp/DbUp/issues/464#issuecomment-895503849)
-- [golang-migrate](https://github.com/golang-migrate/migrate/tree/master/database/cockroachdb)
-- [db-migrate](https://db-migrate.readthedocs.io/en/latest/)
-
-## Connection pooling tools
-
-- 
[PGBouncer](https://dzone.com/articles/using-pgbouncer-with-cockroachdb) - -## IAM tools - -- [Vault](https://www.vaultproject.io/docs/configuration/storage/cockroachdb) - -## See also - -- [Build an App with CockroachDB](example-apps.html) -- [Install a Driver or ORM Framework](install-client-drivers.html) -- [Third-Party Tools Supported by Cockroach Labs](third-party-database-tools.html) diff --git a/src/current/v22.1/computed-columns.md b/src/current/v22.1/computed-columns.md deleted file mode 100644 index 34dae07a761..00000000000 --- a/src/current/v22.1/computed-columns.md +++ /dev/null @@ -1,106 +0,0 @@ ---- -title: Computed Columns -summary: A computed column exposes data generated by an expression included in the column definition. -toc: true -docs_area: develop ---- - -A _computed column_ exposes data generated from other columns by a [scalar expression](scalar-expressions.html) included in the column definition. - - - -A _stored computed column_ (set with the `STORED` SQL keyword) is calculated when a row is inserted or updated, and stores the resulting value of the scalar expression in the primary index similar to a non-computed column. - - - -A _virtual computed column_ (set with the `VIRTUAL` SQL keyword) is not stored, and the value of the scalar expression is computed at query-time as needed. - -## Why use computed columns? - -Computed columns are especially useful when used with [`JSONB`](jsonb.html) columns or [secondary indexes](indexes.html). - -- **JSONB** columns are used for storing semi-structured `JSONB` data. When the table's primary information is stored in `JSONB`, it's useful to index a particular field of the `JSONB` document. In particular, computed columns allow for the following use case: a two-column table with a `PRIMARY KEY` column and a `payload` JSONB column, whose primary key is computed from a field of the `payload` column. This alleviates the need to manually separate your primary keys from your JSON blobs. For more information, see [Create a table with a `JSONB` column and a stored computed column](#create-a-table-with-a-jsonb-column-and-a-stored-computed-column). - -- **Secondary indexes** can be created on computed columns, which is especially useful when a table is frequently sorted. See [Create a table with a secondary index on a computed column](#create-a-table-with-a-secondary-index-on-a-computed-column). - -## Considerations - -Computed columns: - -- Cannot be used to generate other computed columns. -- Behave like any other column, with the exception that they cannot be written to directly. -- Are mutually exclusive with [`DEFAULT`](default-value.html) and [`ON UPDATE`](create-table.html#on-update-expressions) expressions. - -Virtual computed columns: - -- Are not stored in the table's primary index. -- Are recomputed as the column data in the expression changes. -- Cannot be used as part of a `FAMILY` definition, in `CHECK` constraints, or in `FOREIGN KEY` constraints. -- Cannot be a [foreign key](foreign-key.html) reference. -- Cannot be stored in indexes. -- Can be index columns. - -Once a computed column is created, you cannot directly alter the formula. To make modifications to a computed column's formula, see [Alter the formula for a computed column](#alter-the-formula-for-a-computed-column). 
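-
-As a quick illustration of these rules, the following sketch defines one stored and one virtual computed column on a hypothetical `user_names` table (the table and column names here are illustrative only; the full syntax is described in the next section):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE user_names (
-    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-    first_name STRING,
-    last_name STRING,
-    -- Stored: computed when the row is written and persisted in the primary index.
-    full_name STRING AS (concat(first_name, ' ', last_name)) STORED,
-    -- Virtual: recomputed at query time and not stored.
-    initials STRING AS (left(first_name, 1) || left(last_name, 1)) VIRTUAL
-  );
-~~~
-
-Rows inserted into this table can supply values only for the non-computed columns; `full_name` and `initials` are populated automatically, and attempting to write to them directly returns an error.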
-
-## Define a computed column
-
-To define a stored computed column, use the following syntax:
-
-~~~
-column_name <type> AS (<expr>) STORED
-~~~
-
-To define a virtual computed column, use the following syntax:
-
-~~~
-column_name <type> AS (<expr>) VIRTUAL
-~~~
-
-Parameter | Description
-----------|------------
-`column_name` | The [name](keywords-and-identifiers.html#identifiers) of the computed column.
-`<type>` | The [data type](data-types.html) of the computed column.
-`<expr>` | The [immutable](functions-and-operators.html#function-volatility) [scalar expression](scalar-expressions.html) used to compute column values. You cannot use functions such as `now()` or `nextval()` that are not immutable.
-`STORED` | _(Required for stored computed columns)_ The computed column is stored alongside other columns.
-`VIRTUAL` | _(Required for virtual columns)_ The computed column is virtual, meaning the column data is not stored in the table's primary index.
-
-For compatibility with PostgreSQL, CockroachDB also supports creating stored computed columns with the syntax `column_name <type> GENERATED ALWAYS AS (<expr>) STORED`.
-
-## Examples
-
-### Create a table with a stored computed column
-
-{% include {{ page.version.version }}/computed-columns/simple.md %}
-
-### Create a table with a `JSONB` column and a stored computed column
-
-{% include {{ page.version.version }}/computed-columns/jsonb.md %}
-
-### Create a virtual computed column using `JSONB` data
-
-{% include {{ page.version.version }}/computed-columns/virtual.md %}
-
-### Create a table with a secondary index on a computed column
-
-{% include {{ page.version.version }}/computed-columns/secondary-index.md %}
-
-### Add a computed column to an existing table
-
-{% include {{ page.version.version }}/computed-columns/add-computed-column.md %}
-
-For more information, see [`ADD COLUMN`](add-column.html).
-
-### Convert a computed column into a regular column
-
-{% include {{ page.version.version }}/computed-columns/convert-computed-column.md %}
-
-### Alter the formula for a computed column
-
-{% include {{ page.version.version }}/computed-columns/alter-computed-column.md %}
-
-## See also
-
-- [Scalar Expressions](scalar-expressions.html)
-- [Information Schema](information-schema.html)
-- [`CREATE TABLE`](create-table.html)
-- [`JSONB`](jsonb.html)
diff --git a/src/current/v22.1/configure-cockroachdb-kubernetes.md b/src/current/v22.1/configure-cockroachdb-kubernetes.md
deleted file mode 100644
index 98668fb7980..00000000000
--- a/src/current/v22.1/configure-cockroachdb-kubernetes.md
+++ /dev/null
@@ -1,285 +0,0 @@
----
-title: Resource management
-summary: Allocate CPU, memory, and storage resources for a secure 3-node CockroachDB cluster on Kubernetes.
-toc: true
-toc_not_nested: true
-secure: true
-docs_area: deploy
----
-
-This page explains how to configure Kubernetes cluster resources such as memory, CPU, and storage.
-
-These settings override the defaults used when [deploying CockroachDB on Kubernetes](deploy-cockroachdb-with-kubernetes.html).
-
    -{% include {{ page.version.version }}/orchestration/operator-check-namespace.md %} - -{{site.data.alerts.callout_success}} -If you [deployed CockroachDB on Red Hat OpenShift](deploy-cockroachdb-with-kubernetes-openshift.html), substitute `kubectl` with `oc` in the following commands. -{{site.data.alerts.end}} -
    - -On a production cluster, the resources you allocate to CockroachDB should be proportionate to your machine types and workload. We recommend that you determine and set these values before deploying the cluster, but you can also update the values on a running cluster. - -{{site.data.alerts.callout_success}} -Run `kubectl describe nodes` to see the available resources on the instances that you have provisioned. -{{site.data.alerts.end}} - -## Memory and CPU - -You can set the CPU and memory resources allocated to the CockroachDB container on each pod. - -{{site.data.alerts.callout_info}} -1 CPU in Kubernetes is equivalent to 1 vCPU or 1 hyperthread. For best practices on provisioning CPU and memory for CockroachDB, see the [Production Checklist](recommended-production-settings.html#hardware). -{{site.data.alerts.end}} - -
    -Specify CPU and memory values in `resources.requests` and `resources.limits` in the Operator's custom resource, which is used to [deploy the cluster](deploy-cockroachdb-with-kubernetes.html#initialize-the-cluster): - -~~~ yaml -spec: - resources: - requests: - cpu: "4" - memory: "16Gi" - limits: - cpu: "4" - memory: "16Gi" -~~~ - -{% include {{ page.version.version }}/orchestration/apply-custom-resource.md %} -
    - -
    -Specify CPU and memory values in `resources.requests` and `resources.limits` in the StatefulSet manifest you used to [deploy the cluster](deploy-cockroachdb-with-kubernetes.html?filters=manual#configure-the-cluster): - -~~~ yaml -spec: - template: - containers: - - name: cockroachdb - resources: - requests: - cpu: "4" - memory: "16Gi" - limits: - cpu: "4" - memory: "16Gi" -~~~ - -{% include {{ page.version.version }}/orchestration/apply-statefulset-manifest.md %} -
    - -
    -Specify CPU and memory values in `resources.requests` and `resources.limits` in the custom values file you created when [deploying the cluster](deploy-cockroachdb-with-kubernetes.html?filters=helm#step-2-start-cockroachdb): - -~~~ yaml -statefulset: - resources: - limits: - cpu: "4" - memory: "16Gi" - requests: - cpu: "4" - memory: "16Gi" -~~~ - -{% include {{ page.version.version }}/orchestration/apply-helm-values.md %} -
    - -We recommend using identical values for `resources.requests` and `resources.limits`. When setting the new values, note that not all of a pod's resources will be available to the CockroachDB container. This is because a fraction of the CPU and memory is reserved for Kubernetes. - -{{site.data.alerts.callout_info}} -If no resource limits are specified, the pods will be able to consume the maximum available CPUs and memory. However, to avoid overallocating resources when another memory-intensive workload is on the same instance, always set resource requests and limits explicitly. -{{site.data.alerts.end}} - -For more information on how Kubernetes handles resources, see the [Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/). - -
    -## Cache and SQL memory size - -Each CockroachDB node reserves a portion of its available memory for its cache and for storing temporary data for SQL queries. For more information on these settings, see the [Production Checklist](recommended-production-settings.html#cache-and-sql-memory-size). - -Our Kubernetes manifests dynamically set cache size and SQL memory size each to 1/4 (the recommended fraction) of the available memory, which depends on the memory request and limit you [specified](#memory-and-cpu) for your configuration. If you want to customize these values, set them explicitly. - -Specify `cache` and `maxSQLMemory` in the Operator's custom resource, which is used to [deploy the cluster](deploy-cockroachdb-with-kubernetes.html#initialize-the-cluster): - -~~~ yaml -spec: - cache: "4Gi" - maxSQLMemory: "4Gi" -~~~ - -{% include {{ page.version.version }}/orchestration/apply-custom-resource.md %} - -{{site.data.alerts.callout_info}} -Specifying these values is equivalent to using the `--cache` and `--max-sql-memory` flags with [`cockroach start`](cockroach-start.html#flags). -{{site.data.alerts.end}} -
    - -
    -## Cache and SQL memory size - -Each CockroachDB node reserves a portion of its available memory for its cache and for storing temporary data for SQL queries. For more information on these settings, see the [Production Checklist](recommended-production-settings.html#cache-and-sql-memory-size). - -Our Kubernetes manifests dynamically set cache size and SQL memory size each to 1/4 (the recommended fraction) of the available memory, which depends on the memory request and limit you [specified](#memory-and-cpu) for your configuration. If you want to customize these values, set them explicitly. - -Specify `cache` and `maxSQLMemory` in the custom values file you created when [deploying the cluster](deploy-cockroachdb-with-kubernetes.html?filters=helm#step-2-start-cockroachdb): - -~~~ yaml -conf: - cache: "4Gi" - max-sql-memory: "4Gi" -~~~ - -{% include {{ page.version.version }}/orchestration/apply-helm-values.md %} -
    - -## Persistent storage - -When you start your cluster, Kubernetes dynamically provisions and mounts a persistent volume into each pod. For more information on persistent volumes, see the [Kubernetes documentation](https://kubernetes.io/docs/concepts/storage/persistent-volumes/). - -
    -The storage capacity of each volume is set in `pvc.spec.resources` in the Operator's custom resource, which is used to [deploy the cluster](deploy-cockroachdb-with-kubernetes.html#initialize-the-cluster): - -~~~ yaml -spec: - dataStore: - pvc: - spec: - resources: - limits: - storage: "60Gi" - requests: - storage: "60Gi" -~~~ -
    - -
    -The storage capacity of each volume is initially set in `volumeClaimTemplates.spec.resources` in the StatefulSet manifest you used to [deploy the cluster](deploy-cockroachdb-with-kubernetes.html?filters=manual#configure-the-cluster): - -~~~ yaml -volumeClaimTemplates: - spec: - resources: - requests: - storage: 100Gi -~~~ -
    - -
    -The storage capacity of each volume is initially set in the Helm chart's [values file](https://github.com/cockroachdb/helm-charts/blob/master/cockroachdb/values.yaml): - -~~~ yaml -persistentVolume: - size: 100Gi -~~~ -
    - -You should provision an appropriate amount of disk storage for your workload. For recommendations on this, see the [Production Checklist](recommended-production-settings.html#storage). - -### Expand disk size - -If you discover that you need more capacity, you can expand the persistent volumes on a running cluster. Increasing disk size is often [beneficial for CockroachDB performance](kubernetes-performance.html#disk-size). - -
    -Specify a new volume size in `resources.requests` and `resources.limits` in the Operator's custom resource, which is used to [deploy the cluster](deploy-cockroachdb-with-kubernetes.html#initialize-the-cluster): - -~~~ yaml -spec: - dataStore: - pvc: - spec: - resources: - limits: - storage: "100Gi" - requests: - storage: "100Gi" -~~~ - -{% include {{ page.version.version }}/orchestration/apply-custom-resource.md %} - -The Operator updates the StatefulSet and triggers a rolling restart of the pods with the new storage capacity. - -To verify that the storage capacity has been updated, run `kubectl get pvc` to view the persistent volume claims (PVCs). It will take a few minutes before the PVCs are completely updated. -
    - -
    -{% include {{ page.version.version }}/orchestration/kubernetes-expand-disk-manual.md %} -
    - -
    -{% include {{ page.version.version }}/orchestration/kubernetes-expand-disk-helm.md %} -
    - -
    -## Network ports - -The Operator separates network traffic into three ports: - -| Protocol | Default | Description | Custom Resource Field | -|----------|---------|---------------------------------------------------------------------|-----------------------| -| gRPC | 26258 | Used for node connections | `grpcPort` | -| HTTP | 8080 | Used to [access the DB Console](ui-overview.html#db-console-access) | `httpPort` | -| SQL | 26257 | Used for SQL shell access | `sqlPort` | - -Specify alternate port numbers in the Operator's [custom resource](deploy-cockroachdb-with-kubernetes.html#initialize-the-cluster) (for example, to match the default port `5432` on PostgreSQL): - -~~~ yaml -spec: - sqlPort: 5432 -~~~ - -{% include {{ page.version.version }}/orchestration/apply-custom-resource.md %} - -The Operator updates the StatefulSet and triggers a rolling restart of the pods with the new port settings. - -{{site.data.alerts.callout_danger}} -Currently, only the pods are updated with new ports. To connect to the cluster, you need to ensure that the `public` service is also updated to use the new port. You can do this by deleting the service with `kubectl delete service {cluster-name}-public`. When service is recreated by the Operator, it will use the new port. This is a known limitation that will be fixed in an Operator update. -{{site.data.alerts.end}} - -## Ingress - -You can configure an [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) object to expose an internal HTTP or SQL [`ClusterIP` service](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) through a hostname. - -In order to use the Ingress resource, your cluster must be running an [Ingress controller](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/) for load balancing. This is **not** handled by the Operator and must be deployed separately. - -Specify Ingress objects in `ingress.ui` (HTTP) or `ingress.sql` (SQL) in the Operator's custom resource, which is used to [deploy the cluster](deploy-cockroachdb-with-kubernetes.html#initialize-the-cluster): - -~~~ yaml -spec: - ingress: - ui: - ingressClassName: nginx - annotations: - key: value - host: ui.example.com - sql: - ingressClassName: nginx - annotations: - key: value - host: sql.example.com -~~~ - -- `ingressClassName` specifies the [`IngressClass`](https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-class) of the Ingress controller. This example uses the [nginx](https://kubernetes.github.io/ingress-nginx/) controller. - -- The `host` must be made publicly accessible. For example, create a route in [Amazon Route 53](https://aws.amazon.com/route53/), or add an entry to `/etc/hosts` that maps the IP address of the Ingress controller to the hostname. - - {{site.data.alerts.callout_info}} - Multiple hosts can be mapped to the same Ingress controller IP. - {{site.data.alerts.end}} - -- TCP connections for SQL clients must be enabled for the Ingress controller. For an example, see the [nginx documentation](https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/). - - {{site.data.alerts.callout_info}} - Changing the SQL Ingress `host` on a running deployment will cause a rolling restart of the cluster, due to new node certificates being generated for the SQL host. 
- {{site.data.alerts.end}} - -The [custom resource definition](https://github.com/cockroachdb/cockroach-operator/blob/v{{ latest_operator_version }}/config/crd/bases/crdb.cockroachlabs.com_crdbclusters.yaml) details the fields supported by the Operator. -
    diff --git a/src/current/v22.1/configure-logs.md b/src/current/v22.1/configure-logs.md deleted file mode 100644 index 526b236eff9..00000000000 --- a/src/current/v22.1/configure-logs.md +++ /dev/null @@ -1,648 +0,0 @@ ---- -title: Configure logs -summary: How to configure CockroachDB logs with the --log or --log-config-file flag and YAML payload. -toc: true -docs_area: manage ---- - -This page describes how to configure CockroachDB logs with the [`--log` or `log-config-file` flag](cockroach-start.html#logging) and a [YAML payload](#yaml-payload). Most logging behaviors are configurable, including: - -- The [log sinks](#configure-log-sinks) that output logs to different locations, including over the network. -- The [logging channels](logging-overview.html#logging-channels) that are mapped to each sink. -- The [format](log-formats.html) used by the log messages. -- The [redaction](#redact-logs) of log messages. - -For examples of how these settings can be used in practice, see [Logging Use Cases](logging-use-cases.html). - -## Flag - -To configure the logging behavior of a `cockroach` command, include one of these flags with the command: - -- `--log={yaml}`, where `{yaml}` is the [YAML payload](#yaml-payload) -- `--log-config-file={yaml-file}`, where `{yaml-file}` is the path to a YAML file - -To disable logging, set `--log-dir` to a blank directory (`--log-dir=`) instead of using one of the other logging flags. Do not use `--log-dir=""`; this creates a new directory named `""` and stores log files in that directory. - -{{site.data.alerts.callout_success}} -All [`cockroach` commands](cockroach-commands.html) support logging and can be configured with `--log` or `--log-config-file`. However, note that most messages related to cluster operation are generated by [`cockroach start`](cockroach-start.html) or [`cockroach start-single-node`](cockroach-start-single-node.html). Other commands generate messages related to their own execution, which are mainly useful when troubleshooting the behaviors of those commands. -{{site.data.alerts.end}} - -## YAML payload - -All log settings for a `cockroach` command are specified with a YAML payload in one of the following formats: - -- Block format, where each parameter is written on a separate line. For example, after creating a file `logs.yaml`, pass the YAML values with either `--log-config-file` or `--log`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start-single-node --certs-dir=certs --log-config-file=logs.yaml - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start-single-node --certs-dir=certs --log="$(cat logs.yaml)" - ~~~ - -- Inline format, where all parameters are specified on one line. For example, to generate an `ops` log file that collects the `OPS` and `HEALTH` channels (overriding the file groups defined for those channels in the [default configuration](#default-logging-configuration)): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start-single-node --certs-dir=certs --log="sinks: {file-groups: {ops: {channels: [OPS, HEALTH]}}}" - ~~~ - - Note that the inline spaces must be preserved. - -For clarity, this article uses the block format to describe the YAML payload, which has the overall structure: - -~~~ yaml -file-defaults: ... # defaults inherited by file sinks -fluent-defaults: ... # defaults inherited by Fluentd sinks -http-defaults: ... # defaults inherited by HTTP sinks -sinks: - file-groups: ... # file sink definitions - fluent-servers: ... 
# Fluentd sink definitions - http-servers: ... # HTTP sink definitions - stderr: ... # stderr sink definitions -capture-stray-errors: ... # parameters for the stray error capture system -~~~ - -{{site.data.alerts.callout_info}} -Providing a logging configuration is optional. Any fields included in the YAML payload will override the same fields in the [default logging configuration](#default-logging-configuration). -{{site.data.alerts.end}} - -{{site.data.alerts.callout_success}} -You can view your current settings by running `cockroach debug check-log-config`, which returns the YAML definitions and a URL to a visualization of the current logging configuration. -{{site.data.alerts.end}} - -## Configure log sinks - -Log *sinks* route events from specified [logging channels](logging-overview.html#logging-channels) to destinations outside CockroachDB. These destinations currently include [log files](#output-to-files), [Fluentd](https://www.fluentd.org/)-compatible [servers](#output-to-fluentd-compatible-network-collectors), [HTTP servers](#output-to-http-network-collectors), and the [standard error stream (`stderr`)](#output-to-stderr). - -All supported output destinations are configured under `sinks`: - -~~~ yaml -file-defaults: ... -fluent-defaults: ... -http-defaults: ... -sinks: - file-groups: - {file group name}: - channels: {channels} - ... - fluent-servers: - {server name}: - channels: {channels} - ... - http-servers: - {server name}: - channels: {channels} - ... - stderr: - channels: {channels} - ... -~~~ - - - -All supported sink types use the following common sink parameters: - -| Parameter | Description | -|-----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| `filter` | Minimum severity level at which logs enter the channels selected for the sink. Accepts one of the valid [severity levels](logging.html#logging-levels-severities) or `NONE`, which excludes all messages from the sink output. For details, see [Set logging levels](#set-logging-levels). | -| `format` | Log message format to use for file or network sinks. Accepts one of the valid [log formats](log-formats.html). For details, see [file logging format](#file-logging-format), [Fluentd logging format](#fluentd-logging-format), and [HTTP logging format](#http-logging-format). | -| `redact` | When `true`, enables automatic redaction of personally identifiable information (PII) from log messages. This ensures that sensitive data is not transmitted when collecting logs centrally or over a network. For details, see [Redact logs](#redact-logs). | -| `redactable` | When `true`, preserves redaction markers around fields that are considered sensitive in the log messages. The markers are recognized by [`cockroach debug zip`](cockroach-debug-zip.html) and [`cockroach debug merge-logs`](cockroach-debug-merge-logs.html) but may not be compatible with external log collectors. For details on how the markers appear in each format, see [Log formats](log-formats.html). | -| `exit-on-error` | When `true`, stops the Cockroach node if an error is encountered while writing to the sink. 
We recommend enabling this option on file sinks in order to avoid losing any log entries. When set to `false`, this can be used to mark certain sinks (such as `stderr`) as non-critical. | -| `auditable` | If `true`, enables `exit-on-error` on the sink. Also disables `buffered-writes` if the sink is under `file-groups`. This guarantees [non-repudiability](https://en.wikipedia.org/wiki/Non-repudiation) for any logs in the sink, but can incur a performance overhead and higher disk IOPS consumption. This setting is typically enabled for [security-related logs](logging-use-cases.html#security-and-audit-monitoring). | - -If not specified for a given sink, these parameter values are inherited from [`file-defaults`](#set-file-defaults) (for file sinks), [`fluent-defaults`](#set-fluentd-defaults) (for Fluentd sinks), and [`http-defaults`](#set-http-defaults) (for HTTP sinks). - -### Output to files - -CockroachDB can write messages to one or more log files. - -`file-groups` specifies the channels that output to each log file, along with its output directory and other configuration details. For example: - -~~~ yaml -file-defaults: ... -fluent-defaults: ... -http-defaults: ... -sinks: - file-groups: - default: - channels: [DEV] - health: - channels: [HEALTH] - dir: health-logs - ... -~~~ - -{{site.data.alerts.callout_success}} -A file group name is arbitrary and is used to name the log files. The `default` file group is an exception. For details, see [Log file naming](#log-file-naming). -{{site.data.alerts.end}} - -Along with the [common sink parameters](#common-sink-parameters), each file group accepts the following parameters: - -| Parameter | Description | -|--------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| `channels` | List of channels that output to this sink. Use a YAML array or string of [channel names](logging-overview.html#logging-channels), `ALL` to include all channels, or `ALL EXCEPT {channels}` to include all channels except the specified channel names.

    For more details on acceptable syntax, see [Logging channel selection](#logging-channel-selection). | -| `dir` | Output directory for log files generated by this sink. | -| `max-file-size` | Approximate maximum size of individual files generated by this sink. | -| `max-group-size` | Approximate maximum combined size of all files to be preserved for this sink. An asynchronous garbage collection removes files that cause the file set to grow beyond this specified size. For high-traffic deployments, or to ensure log retention over longer periods of time, consider raising this value to `500MiB` or `1GiB`.

    **Default:** `100MiB` | -| `file-permissions` | The `chmod`-style permissions on generated log files, formatted as a 3-digit octal number. The executable bit must not be set.

    **Default:** `640` (readable by the owner or members of the group, writable by the owner). | -| `buffered-writes` | When `true`, enables buffering of writes. Set to `false` to flush every log entry (i.e., propagate data from the `cockroach` process to the OS) and synchronize writes (i.e., ask the OS to confirm the log data was written to disk). Disabling this setting provides [non-repudiation](https://en.wikipedia.org/wiki/Non-repudiation) guarantees, but can incur a performance overhead and higher disk IOPS consumption. This setting is typically disabled for [security-related logs](logging-use-cases.html#security-and-audit-monitoring). | - -If not specified for a given file group, the parameter values are inherited from [`file-defaults`](#configure-logging-defaults) (except `channels`, which uses the [default configuration](#default-logging-configuration)). - -#### Log file naming - -Log files are named in the following format: - -~~~ -{process}-{file group}.{host}.{user}.{start timestamp in UTC}.{process ID}.log -~~~ - -For example, a file group `health` will generate a log file that looks like this: - -~~~ -cockroach-health.work-laptop.worker.2021-03-15T15_24_10Z.024338.log -~~~ - -For each file group, a symlink points to the latest generated log file. It's easiest to refer to the symlink. For example: - -~~~ -cockroach-health.log -~~~ - -{{site.data.alerts.callout_info}} -The files generated for a group named `default` are named after the pattern `cockroach.{metadata}.log`. -{{site.data.alerts.end}} - -#### Access in DB Console - -{{site.data.alerts.callout_success}} -{% include {{ page.version.version }}/ui/ui-log-files.md %} -{{site.data.alerts.end}} - -#### Known limitations - -Log files can only be accessed in the DB Console if they are stored in the same directory as the file sink for the `DEV` channel. - -### Output to Fluentd-compatible network collectors - -CockroachDB can send logs over the network to a [Fluentd](https://www.fluentd.org/)-compatible log collector (e.g., [Elasticsearch](https://www.elastic.co/elastic-stack), [Splunk](https://www.splunk.com/)). `fluent-servers` specifies the channels that output to a server, along with the server configuration details. For example: - -~~~ yaml -file-defaults: ... -fluent-defaults: ... -http-defaults: ... -sinks: - fluent-servers: - health: - channels: [HEALTH] - address: 127.0.0.1:5170 - ... -~~~ - -{{site.data.alerts.callout_info}} -A Fluentd sink can be listed more than once with different `address` values. This routes the same logs to different Fluentd servers. -{{site.data.alerts.end}} - -Along with the [common sink parameters](#common-sink-parameters), each Fluentd server accepts the following parameters: - -| Parameter | Description | -|-----------|--------------------------------------------------------------------------------------------------------------------| -| `channels` | List of channels that output to this sink. Use a YAML array or string of [channel names](logging-overview.html#logging-channels), `ALL` to include all channels, or `ALL EXCEPT {channels}` to include all channels except the specified channel names.

    For more details on acceptable syntax, see [Logging channel selection](#logging-channel-selection). | -| `address` | Network address and port of the log collector. | -| `net` | Network protocol to use. Can be `tcp`, `tcp4`, `tcp6`, `udp`, `udp4`, `udp6`, or `unix`.

    **Default:** `tcp` | - -A Fluentd sink buffers at most one log entry and retries sending the event at most one time if a network error is encountered. This is just sufficient to tolerate a restart of the Fluentd collector after a configuration change under light logging activity. If the server is unavailable for too long, or if more than one error is encountered, an error is reported to the process's standard error output with a copy of the logging event, and the logging event is dropped. - -For an example network logging configuration, see [Logging use cases](logging-use-cases.html#network-logging). - -### Output to HTTP network collectors - - CockroachDB can send logs over the network to an HTTP server. `http-servers` specifies the channels that output to a server, along with the server configuration details. For example: - -~~~ yaml -file-defaults: ... -fluent-defaults: ... -http-defaults: ... -sinks: - http-servers: - health: - channels: [HEALTH] - address: 127.0.0.1:5170 - method: POST - unsafe-tls: false - timeout: 2s - disable-keep-alives: false - ... -~~~ - -{{site.data.alerts.callout_info}} -An HTTP sink can be listed more than once with different `address` values. This routes the same logs to different HTTP servers. -{{site.data.alerts.end}} - -Along with the [common sink parameters](#common-sink-parameters), each HTTP server accepts the following parameters: - -| Parameter | Description | -|-----------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| `channels` | List of channels that output to this sink. Use a YAML array or string of [channel names](logging-overview.html#logging-channels), `ALL` to include all channels, or `ALL EXCEPT {channels}` to include all channels except the specified channel names.

    For more details on acceptable syntax, see [Logging channel selection](#logging-channel-selection). | -| `address` | Network address and port of the log collector. | -| `method` | HTTP method to use. Can be `GET` or `POST`.

    **Default:** `POST` | -| `unsafe-tls` | When `true`, bypasses TLS server validation.

    **Default:** `false` | -| `timeout` | Timeout before requests are abandoned.

    **Default:** `0` (no timeout) | -| `disable-keep-alives` | When `true`, disallows reuse of the server connection across requests.

    **Default:** `false` (reuses connections) | - -An HTTP sink buffers at most one log entry and retries sending the event at most one time if a network error is encountered. This is just sufficient to tolerate a restart of the HTTP collector after a configuration change under light logging activity. If the server is unavailable for too long, or if more than one error is encountered, an error is reported to the process's standard error output with a copy of the logging event, and the logging event is dropped. - -For an example network logging configuration, see [Logging use cases](logging-use-cases.html#network-logging). - -### Output to `stderr` - -CockroachDB can output messages to the [standard error stream (`stderr`)](https://en.wikipedia.org/wiki/Standard_streams#Standard_error_(stderr)), which prints them to the machine's terminal but does not store them. `stderr` specifies the channels that output to the stream. For example: - -~~~ yaml -file-defaults: ... -fluent-defaults: ... -http-defaults: ... -sinks: - stderr: - channels: [DEV] -~~~ - -Along with the [common sink parameters](#common-sink-parameters), `stderr` accepts the following parameters: - -{{site.data.alerts.callout_info}} -The `format` parameter for `stderr` is set to [`crdb-v2-tty`](log-formats.html#format-crdb-v2-tty) and cannot be changed. -{{site.data.alerts.end}} - -| Parameter | Description | -|------------|-------------------------------------------------------------------| -| `channels` | List of channels that output to this sink. Use a YAML array or string of [channel names](logging-overview.html#logging-channels), `ALL` to include all channels, or `ALL EXCEPT {channels}` to include all channels except the specified channel names.

    For more details on acceptable syntax, see [Logging channel selection](#logging-channel-selection). |
-| `no-color` | When `true`, removes terminal color [escape codes](https://en.wikipedia.org/wiki/ANSI_escape_code) from the output. |
-
-Because server start-up messages are always emitted at the start of the standard error stream, it is generally difficult to automate integration of `stderr` with log analyzers. We recommend using [file logging](#output-to-files) or network logging via [Fluentd](#output-to-fluentd-compatible-network-collectors) or [HTTP](#output-to-http-network-collectors) instead of `stderr` when integrating with automated monitoring software.
-
-{{site.data.alerts.callout_info}}
-By default, `cockroach start` and `cockroach start-single-node` do not print any messages to `stderr`. However, if the `cockroach` process does not have access to on-disk storage, all messages are printed to `stderr`.
-{{site.data.alerts.end}}
-
-### Logging channel selection
-
-For each sink, multiple channels can be selected. Note that:
-
-- Spacing between items will vary according to the syntax used.
-- Channel selection is case-insensitive.
-
-These selections are equivalent:
-
-~~~ yaml
-# Select OPS and HEALTH.
-channels: [OPS, HEALTH]
-channels: 'OPS, HEALTH'
-channels: OPS,HEALTH
-channels:
-- OPS
-- HEALTH
-~~~
-
-By default, the severity level set by `filter` in the [sink configuration](#common-sink-parameters) is used. However, you can specify a different severity level for each channel. For more information on severity levels, see [Set logging levels](#set-logging-levels).
-
-These selections are equivalent:
-
-~~~ yaml
-# Select PERF at severity INFO, and HEALTH and OPS at severity WARNING.
-channels: {INFO: [PERF], WARNING: [HEALTH, OPS]}
-channels:
-  INFO:
-  - PERF
-  WARNING:
-  - OPS
-  - HEALTH
-~~~
-
-Brackets are optional when selecting a single channel:
-
-~~~ yaml
-channels: OPS
-channels: {INFO: PERF}
-~~~
-
-To select all channels, use the `all` keyword:
-
-~~~ yaml
-channels: all
-channels: 'all'
-channels: [all]
-channels: ['all']
-~~~
-
-To select all channels except for a subset, use the `all except` keyword prefix:
-
-~~~ yaml
-channels: all except ops,health
-channels: all except [ops,health]
-channels: 'all except ops, health'
-channels: 'all except [ops, health]'
-~~~
-
-## Configure logging defaults
-
-When setting up a logging configuration, it's simplest to define shared parameters in `file-defaults`, `fluent-defaults`, and `http-defaults`, and override specific values as needed in [`file-groups`](#output-to-files), [`fluent-servers`](#output-to-fluentd-compatible-network-collectors), [`http-servers`](#output-to-http-network-collectors), and [`stderr`](#output-to-stderr). For a complete example, see the [default configuration](#default-logging-configuration).
-
-{{site.data.alerts.callout_success}}
-You can view your current settings by running `cockroach debug check-log-config`, which returns the YAML definitions and a URL to a visualization of the current logging configuration.
-{{site.data.alerts.end}}
-
-This section describes some recommended defaults.
-
-### Set file defaults
-
-Defaults for log files are set in `file-defaults`, which accepts all [common sink parameters](#common-sink-parameters) and the [file group parameters](#output-to-files) `dir`, `max-file-size`, `max-group-size`, `file-permissions`, and `buffered-writes`.
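-
-For example, here is a minimal sketch that sets shared file parameters once and overrides the output directory for a single file group; the directory paths and size values are illustrative, not recommendations:
-
-~~~ yaml
-file-defaults:
-  dir: /var/log/cockroach            # illustrative path; see Logging directory below
-  max-file-size: 10MiB
-  max-group-size: 500MiB
-  buffered-writes: true
-sinks:
-  file-groups:
-    health:
-      channels: [HEALTH]
-      dir: /var/log/cockroach-health # overrides file-defaults for this group only
-~~~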
-
-#### Logging directory
-
-By default, CockroachDB adds log files to a `logs` subdirectory in the first on-disk [`store` directory](cockroach-start.html#store) (default: `cockroach-data`):
-
-~~~
-cockroach-data/logs
-~~~
-
-{{site.data.alerts.callout_success}}
-Each Cockroach node generates log files in the directory specified by its logging configuration. These logs detail the internal activity of that node without visibility into the behavior of other nodes. When troubleshooting, it's best to refer to the output directory for the cluster log files, which collect the messages from all active nodes.
-{{site.data.alerts.end}}
-
-In cloud deployments, the [main data store](cockroach-start.html#store) will be subject to an IOPS budget. Adding logs to the store directory will excessively consume IOPS. For this reason, cloud deployments should output log files to a separate directory with fewer IOPS restrictions.
-
-You can override the default logging directory like this:
-
-~~~ yaml
-file-defaults:
-  dir: /custom/dir/path/
-~~~
-
-#### File logging format
-
-The default message format for log files is [`crdb-v2`](log-formats.html#format-crdb-v2). Each `crdb-v2` log message starts with a flat prefix that contains event metadata (e.g., severity, date, timestamp, channel), followed by the event payload. For details on the metadata, see [Log formats](log-formats.html#format-crdb-v2).
-
-If you plan to read logs programmatically, you can switch to a [JSON](log-formats.html#format-json) or [compact JSON](log-formats.html#format-json-compact) format:
-
-~~~ yaml
-file-defaults:
-  format: json
-~~~
-
-{{site.data.alerts.callout_info}}
-`format` refers to the envelope of the log message, which contains the event metadata. This is separate from the event payload, which corresponds to its [event type](eventlog.html).
-{{site.data.alerts.end}}
-
-### Set Fluentd defaults
-
-Defaults for Fluentd-compatible network sinks are set in `fluent-defaults`, which accepts all [common sink parameters](#common-sink-parameters).
-
-Note that the [server parameters](#output-to-fluentd-compatible-network-collectors) `address` and `net` are *not* specified in `fluent-defaults`:
-
-- `address` must be specified for each sink under `fluent-servers`.
-- `net` is not required and defaults to `tcp`.
-
-#### Fluentd logging format
-
-The default message format for network output is [`json-fluent-compact`](log-formats.html#format-json-fluent-compact). Each log message is structured as a JSON payload that can be read programmatically. The `json-fluent-compact` and [`json-fluent`](log-formats.html#format-json-fluent) formats include a `tag` field that is required by the [Fluentd protocol](https://docs.fluentd.org/configuration/config-file). For details, see [Log formats](log-formats.html#format-json-fluent-compact).
-
-~~~ yaml
-fluent-defaults:
-  format: json-fluent
-~~~
-
-{{site.data.alerts.callout_info}}
-`format` refers to the envelope of the log message. This is separate from the event payload, which is structured according to [event type](eventlog.html).
-{{site.data.alerts.end}}
-
-### Set HTTP defaults
-
-Defaults for HTTP sinks are set in `http-defaults`, which accepts all [common sink parameters](#common-sink-parameters).
-
-Note that the [server parameters](#output-to-http-network-collectors) `address` and `method` are *not* specified in `http-defaults`:
-
-- `address` must be specified for each sink under `http-servers`.
-- `method` is not required and defaults to `POST`.
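-
-For example, this minimal sketch sets shared HTTP parameters once and supplies `address` for each server; the collector address and timeout shown are illustrative:
-
-~~~ yaml
-http-defaults:
-  timeout: 5s
-sinks:
-  http-servers:
-    ops:
-      channels: [OPS]
-      address: 127.0.0.1:8080   # illustrative collector endpoint
-      method: POST
-~~~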
- -#### HTTP logging format - -The default message format for HTTP output is [`json-compact`](log-formats.html#format-json-compact). Each log message is structured as a JSON payload that can be read programmatically. For details, see [Log formats](log-formats.html#format-json-compact). - -~~~ yaml -http-defaults: - format: json -~~~ - -{{site.data.alerts.callout_info}} -`format` refers to the envelope of the log message. This is separate from the event payload, which is structured according to [event type](eventlog.html). -{{site.data.alerts.end}} - -### Set logging levels - -Log messages are associated with a [severity level](logging.html#logging-levels-severities) when they are generated. - -Each logging sink accepts messages from each logging channel at a minimum severity level. This minimum severity level can be specified [per sink](#common-sink-parameters) or by default using the `filter` attribute. - -Messages with severity levels below the configured threshold do not enter logging channels and are discarded. - -The [default configuration](#default-logging-configuration) uses the following severity levels for [`cockroach start`](cockroach-start.html) and [`cockroach start-single-node`](cockroach-start-single-node.html): - -- `file-defaults`, `fluent-defaults`, and `http-defaults` each use `filter: INFO`. Since `INFO` is the lowest severity level, file and network sinks will emit all log messages. -- `stderr` uses `filter: NONE` and does not emit log messages. -- The `default` file group uses `filter: INFO` for events from the `DEV` and `OPS` channels, and `filter: WARNING` for all others. - -{{site.data.alerts.callout_info}} -All other `cockroach` commands use `filter: WARNING` and log to `stderr` by default, with these exceptions: - -- [`cockroach workload`](cockroach-sql.html#logging) uses `filter: INFO`. -- [`cockroach demo`](cockroach-demo.html#logging) uses `filter: NONE` (discards all log messages). -{{site.data.alerts.end}} - -You can override the `file-defaults`, `fluent-defaults`, and `http-defaults` severity levels on a per-sink basis. - -For example, this will include `DEV` events at severity `WARNING`: - -~~~ yaml -sinks: - file-groups: - dev: - channels: DEV - filter: WARNING -~~~ - -You can also override the `filter` attribute and set severity levels on a [per-channel](#logging-channel-selection) basis. - -For example, this will include `DEV` events at severity `INFO`, and `OPS` events at severity `ERROR`: - -~~~ yaml -sinks: - file-groups: - dev: - channels: {INFO: DEV, ERROR: OPS} -~~~ - -### Redact logs - -CockroachDB can redact personally identifiable information (PII) from log messages. The logging system includes two parameters that handle this differently: - -- `redact` is disabled by default. When enabled, `redact` automatically redacts sensitive data from logging output. We do *not* recommend enabling this on the `DEV` channel because it impairs our ability to troubleshoot problems. -- `redactable` is enabled by default. This places redaction markers around sensitive fields in log messages. These markers are recognized by [`cockroach debug zip`](cockroach-debug-zip.html) and [`cockroach debug merge-logs`](cockroach-debug-merge-logs.html), which aggregate CockroachDB log files and can be instructed to redact sensitive data from their output. 
- -When collecting logs centrally (e.g., in data mining scenarios where non-privileged users have access to logs) or over a network (e.g., to an external log collector), it's safest to enable `redact`: - -~~~ yaml -file-defaults: - redact: true -fluent-defaults: - redact: true -http-defaults: - redact: true -~~~ - -{{site.data.alerts.callout_success}} -In addition, the `DEV` channel should be output to a separate logging directory, since it is likely to contain sensitive data. See [`DEV` channel](#dev-channel). -{{site.data.alerts.end}} - -External log collectors can misinterpret the `cockroach debug` redaction markers, since they are specific to CockroachDB. To prevent this issue when using network sinks, disable `redactable`: - -~~~ yaml -fluent-defaults: - redactable: false -~~~ - -### DEV channel - -The `DEV` channel is used for debug and uncategorized messages. It can therefore be noisy and contain sensitive (PII) information. - -We recommend configuring `DEV` separately from the other logging channels. When sending logs to a [Fluentd-compatible](#output-to-fluentd-compatible-network-collectors) or [HTTP](#output-to-http-network-collectors) network collector, `DEV` logs should also be excluded from network collection. - -In this example, the `dev` file group is reserved for `DEV` logs. These are output to a `cockroach-dev.log` file in a custom disk `dir`: - -~~~ yaml -file-defaults: ... -fluent-defaults: ... -sinks: - file-groups: - dev: - channels: [DEV] - dir: /custom/dir/path/ - ... -~~~ - -{{site.data.alerts.callout_success}} -To ensure that you are protecting sensitive information, also [redact your logs](#redact-logs). -{{site.data.alerts.end}} - -## Stray error capture - -Certain events, such as uncaught software exceptions (panics), bypass the CockroachDB logging system. However, they can be useful in troubleshooting. For example, if CockroachDB crashes, it normally logs a stack trace to what caused the problem. - -To ensure that these stray errors can be tracked, CockroachDB does not send them to `stderr` by default. Instead, stray errors are output to a `cockroach-stderr.log` file in the default [logging directory](#logging-directory). - -You can change these settings in `capture-stray-errors`: - -~~~ yaml -file-defaults: ... -fluent-defaults: ... -sinks: ... -capture-stray-errors: - enable: true - dir: /custom/dir/path/ -~~~ - -{{site.data.alerts.callout_info}} -When `capture-stray-errors` is disabled, [`redactable`](#redact-logs) cannot be enabled on the `stderr` sink. This is because `stderr` will contain both stray errors and logged events and cannot apply redaction markers in a reliable way. Note that [`redact`](#redact-logs) can still be enabled on `stderr` in this case. -{{site.data.alerts.end}} - -## Default logging configuration - -The YAML payload below represents the default logging behavior of [`cockroach start`](cockroach-start.html) and [`cockroach start-single-node`](cockroach-start-single-node.html). 
- -~~~ yaml -file-defaults: - max-file-size: 10MiB - max-group-size: 100MiB - file-permissions: 644 - buffered-writes: true - filter: INFO - format: crdb-v2 - redact: false - redactable: true - exit-on-error: true - auditable: false -fluent-defaults: - filter: INFO - format: json-fluent-compact - redact: false - redactable: true - exit-on-error: false - auditable: false -http-defaults: - method: POST - unsafe-tls: false - timeout: 0s - disable-keep-alives: false - filter: INFO - format: json-compact - redact: false - redactable: true - exit-on-error: false - auditable: false -sinks: - file-groups: - default: - channels: - INFO: [DEV, OPS] - WARNING: all except [DEV, OPS] - health: - channels: [HEALTH] - pebble: - channels: [STORAGE] - security: - channels: [PRIVILEGES, USER_ADMIN] - auditable: true - sql-audit: - channels: [SENSITIVE_ACCESS] - auditable: true - sql-auth: - channels: [SESSIONS] - auditable: true - sql-exec: - channels: [SQL_EXEC] - sql-slow: - channels: [SQL_PERF] - sql-slow-internal-only: - channels: [SQL_INTERNAL_PERF] - telemetry: - channels: [TELEMETRY] - max-file-size: 100KiB - max-group-size: 1.0MiB - stderr: - channels: all - filter: NONE - redact: false - redactable: true - exit-on-error: true -capture-stray-errors: - enable: true - max-group-size: 100MiB -~~~ - -{{site.data.alerts.callout_info}} -For high-traffic deployments that [output log messages to files](#output-to-files), consider raising `file-defaults: {max-group-size}` to `500MiB` or `1GiB` to extend log retention. -{{site.data.alerts.end}} - -Note that a default `dir` is not specified for `file-defaults` and `capture-stray-errors`: - -- The default `dir` for `file-defaults` is inferred from the first on-disk [`store` directory](cockroach-start.html#store). See [Logging directory](#logging-directory). -- The default `dir` for `capture-stray-errors` is inherited form `file-defaults`. - -## See also - -- [Logging Use Cases](logging-use-cases.html) -- [Log Formats](log-formats.html) diff --git a/src/current/v22.1/configure-replication-zones.md b/src/current/v22.1/configure-replication-zones.md deleted file mode 100644 index 5e10d2402ba..00000000000 --- a/src/current/v22.1/configure-replication-zones.md +++ /dev/null @@ -1,696 +0,0 @@ ---- -title: Configure Replication Zones -summary: In CockroachDB, you use replication zones to control the number and location of replicas for specific sets of data. -keywords: ttl, time to live, availability zone -toc: true -docs_area: manage ---- - -Replication zones give you the power to control what data goes where in your CockroachDB cluster. Specifically, they are used to control the number and location of replicas for data belonging to the following objects: - -- Databases -- Tables -- Rows ([Enterprise-only](enterprise-licensing.html)) -- Indexes ([Enterprise-only](enterprise-licensing.html)) -- All data in the cluster, including internal system data ([via the default replication zone](#view-the-default-replication-zone)) - -For each of these objects you can control: - -- How many copies of each range to spread through the cluster. -- Which constraints are applied to which data, e.g., "table X's data can only be stored in the German availability zones". -- The maximum size of ranges (how big ranges get before they are split). -- How long old data is kept before being garbage collected. 
-- Where you would like the leaseholders for certain ranges to be located, e.g., "for ranges that are already constrained to have at least one replica in `region=us-west`, also try to put their leaseholders in `region=us-west`". - -This page explains how replication zones work and how to use the [`CONFIGURE ZONE`](configure-zone.html) statement to manage them. - -{{site.data.alerts.callout_info}} -To configure replication zones, a user must be a member of the [`admin` role](security-reference/authorization.html#admin-role) or have been granted [`CREATE`](security-reference/authorization.html#supported-privileges) or [`ZONECONFIG`](security-reference/authorization.html#supported-privileges) privileges. To configure [`system` objects](#for-system-data), the user must be a member of the `admin` role. -{{site.data.alerts.end}} - -## Overview - -Every [range](architecture/overview.html#architecture-range) in the cluster is part of a replication zone. Each range's zone configuration is taken into account as ranges are rebalanced across the cluster to ensure that any constraints are honored. - -When a cluster starts, there are two categories of replication zone: - -1. Pre-configured replication zones that apply to internal system data. -2. A single default replication zone that applies to the rest of the cluster. - -You can adjust these pre-configured zones as well as add zones for individual databases, tables, rows, and secondary indexes as needed. Note that adding zones for rows and secondary indexes is [Enterprise-only](enterprise-licensing.html). - -For example, you might rely on the [default zone](#view-the-default-replication-zone) to spread most of a cluster's data across all of your availability zones, but [create a custom replication zone for a specific database](#create-a-replication-zone-for-a-database) to make sure its data is only stored in certain availability zones and/or geographies. - -## Replication zone levels - -There are five replication zone levels for [**table data**](architecture/distribution-layer.html#table-data) in a cluster, listed from least to most granular: - -Level | Description -------|------------ -Cluster | CockroachDB comes with a pre-configured `default` replication zone that applies to all table data in the cluster not constrained by a database, table, or row-specific replication zone. This zone can be adjusted but not removed. See [View the Default Replication Zone](#view-the-default-replication-zone) and [Edit the Default Replication Zone](#edit-the-default-replication-zone) for more details. -Database | You can add replication zones for specific databases. See [Create a Replication Zone for a Database](#create-a-replication-zone-for-a-database) for more details. -Table | You can add replication zones for specific tables. See [Create a Replication Zone for a Table](#create-a-replication-zone-for-a-table). -Index ([Enterprise-only](enterprise-licensing.html)) | The [secondary indexes](indexes.html) on a table will automatically use the replication zone for the table. However, with an Enterprise license, you can add distinct replication zones for secondary indexes. See [Create a Replication Zone for a Secondary Index](#create-a-replication-zone-for-a-secondary-index) for more details. -Row ([Enterprise-only](enterprise-licensing.html)) | You can add replication zones for specific rows in a table or secondary index by [defining table partitions](partitioning.html). 
See [Create a Replication Zone for a Table Partition](#create-a-replication-zone-for-a-partition) for more details. - -### For system data - -In addition, CockroachDB stores internal [**system data**](architecture/distribution-layer.html#monolithic-sorted-map-structure) in what are called system ranges. There are two replication zone levels for this internal system data, listed from least to most granular: - -Level | Description -------|------------ -Cluster | The `default` replication zone mentioned above also applies to all system ranges not constrained by a more specific replication zone. -System Range | CockroachDB comes with pre-configured replication zones for important system ranges, such as the "meta" and "liveness" ranges. If necessary, you can add replication zones for the "timeseries" range and other system ranges as well. Editing replication zones for system ranges may override settings from `default`. See [Create a Replication Zone for a System Range](#create-a-replication-zone-for-a-system-range) for more details.

    CockroachDB also comes with pre-configured replication zones for the internal `system` database and the `system.jobs` table, which stores metadata about long-running jobs such as schema changes and backups. - -### Level priorities - -When replicating data, whether table or system, CockroachDB always uses the most granular replication zone available. For example, for a piece of user data: - -1. If there's a replication zone for the row, CockroachDB uses it. -2. If there's no applicable row replication zone and the row is from a secondary index, CockroachDB uses the secondary index replication zone. -3. If the row isn't from a secondary index or there is no applicable secondary index replication zone, CockroachDB uses the table replication zone. -4. If there's no applicable table replication zone, CockroachDB uses the database replication zone. -5. If there's no applicable database replication zone, CockroachDB uses the `default` cluster-wide replication zone. - -## Manage replication zones - -Use the [`CONFIGURE ZONE`](configure-zone.html) statement to [add](#create-a-replication-zone-for-a-system-range), [modify](#edit-the-default-replication-zone), [reset](#reset-a-replication-zone), and [remove](#remove-a-replication-zone) replication zones. - -### Replication zone variables - -Use the [`ALTER ... CONFIGURE ZONE`](configure-zone.html) [statement](sql-statements.html) to set a replication zone: - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE t CONFIGURE ZONE USING range_min_bytes = 0, range_max_bytes = 90000, gc.ttlseconds = 89999, num_replicas = 5, constraints = '[-region=west]'; -~~~ - -{% include {{page.version.version}}/zone-configs/variables.md %} - -### Replication constraints - -The location of replicas, both when they are first added and when they are rebalanced to maintain cluster equilibrium, is based on the interplay between descriptive attributes assigned to nodes and constraints set in zone configurations. - -{{site.data.alerts.callout_success}}For demonstrations of how to set node attributes and replication constraints in different scenarios, see Scenario-based Examples below.{{site.data.alerts.end}} - -#### Descriptive attributes assigned to nodes - -When starting a node with the [`cockroach start`](cockroach-start.html) command, you can assign the following types of descriptive attributes: - -{% capture locality_case_sensitive_example %}--locality datacenter=us-east-1 --locality datacenter=datacenter=US-EAST-1{% endcapture %} - -Attribute Type | Description ----------------|------------ -**Node Locality** | Using the [`--locality`](cockroach-start.html#locality) flag, you can assign arbitrary key-value pairs that describe the location of the node. Locality might include region, country, availability zone, etc. The key-value pairs should be ordered into _locality tiers_ that range from most inclusive to least inclusive (e.g., region before availability zone as in `region=eu,az=paris`), and the keys and the order of key-value pairs must be the same on all nodes. It's typically better to include more pairs than fewer. For example:

    `--locality=region=east,az=us-east-1`
    `--locality=region=east,az=us-east-2`
    `--locality=region=west,az=us-west-1`

    CockroachDB attempts to spread replicas evenly across the cluster based on locality, with the order of locality tiers determining the priority. Locality can also be used to influence the location of data replicas in various ways using replication zones.

    When there is high latency between nodes, CockroachDB uses locality to move range leases closer to the current workload, reducing network round trips and improving read performance. See [Follow-the-workload](topology-follow-the-workload.html) for more details.

    **Note**: Repeating an exact locality value has no effect, but locality values are case-sensitive. For example, from the point of view of CockroachDB, the following values result in two separate localities:

    {{locality_case_sensitive_example}}

    This type of configuration error can lead to issues that are difficult to diagnose. -**Node Capability** | Using the `--attrs` flag, you can specify node capability, which might include specialized hardware or number of cores, for example:

    `--attrs=ram:64gb` -**Store Type/Capability** | Using the `attrs` field of the `--store` flag, you can specify disk type or capability, for example:

    `--store=path=/mnt/ssd01,attrs=ssd`
    `--store=path=/mnt/hda1,attrs=hdd:7200rpm` - -#### Types of constraints - -The node-level and store-level descriptive attributes mentioned above can be used as the following types of constraints in replication zones to influence the location of replicas. However, note the following general guidance: - -- When locality is the only consideration for replication, it's recommended to set locality on nodes without specifying any constraints in zone configurations. In the absence of constraints, CockroachDB attempts to spread replicas evenly across the cluster based on locality. -- Required and prohibited constraints are useful in special situations where, for example, data must or must not be stored in a specific country or on a specific type of machine. -- Avoid conflicting constraints. CockroachDB returns an error if you: - - Redefine a required constraint key within the same `constraints` definition on all replicas. For example, `constraints = '[+region=west, +region=east]'` will result in an error. - - Define a required and prohibited definition for the same key-value pair. For example, `constraints = '[-region=west, +region=west]'` will result in an error. - -Constraint Type | Description | Syntax -----------------|-------------|------- -**Required** | When placing replicas, the cluster will consider only nodes/stores with matching attributes or localities. When there are no matching nodes/stores, new replicas will not be added. | `+ssd` -**Prohibited** | When placing replicas, the cluster will ignore nodes/stores with matching attributes or localities. When there are no alternate nodes/stores, new replicas will not be added. | `-ssd` - -#### Scope of constraints - -Constraints can be specified such that they apply to all replicas in a zone or such that different constraints apply to different replicas, meaning you can effectively pick the exact location of each replica. - -Constraint Scope | Description | Syntax ------------------|-------------|------- -**All Replicas** | Constraints specified using JSON array syntax apply to all replicas in every range that's part of the replication zone. | `constraints = '[+ssd, -region=west]'` -**Per-Replica** | Multiple lists of constraints can be provided in a JSON object, mapping each list of constraints to an integer number of replicas in each range that the constraints should apply to.

    The total number of replicas constrained cannot be greater than the total number of replicas for the zone (`num_replicas`). However, if the total number of replicas constrained is less than the total number of replicas for the zone, the non-constrained replicas will be allowed on any nodes/stores.

    Note that per-replica constraints must be "required" (e.g., `'{"+region=west": 1}'`); they cannot be "prohibited" (e.g., `'{"-region=west": 1}'`). Also, when defining per-replica constraints on a database or table, `num_replicas` must be specified as well, but not when defining per-replica constraints on an index or partition.

    See the [Per-replica constraints](#per-replica-constraints-to-specific-availability-zones) example for more details. | `constraints = '{"+ssd,+region=west": 2, "+region=east": 1}', num_replicas = 3` - -### Node/replica recommendations - -See [Cluster Topography](recommended-production-settings.html#topology) recommendations for production deployments. - -### Troubleshooting zone constraint violations - -To see if any of the data placement constraints defined in your replication zone configurations are being violated, use the `system.replication_constraint_stats` report as described in [Replication Reports](query-replication-reports.html). - -## View replication zones - -Use the [`SHOW ZONE CONFIGURATIONS`](#view-all-replication-zones) statement to view details about existing replication zones. - -You can also use the [`SHOW PARTITIONS`](show-partitions.html) statement to view the zone constraints on existing table partitions, or [`SHOW CREATE TABLE`](show-create.html) to view zone configurations for a table. - -{% include {{page.version.version}}/sql/crdb-internal-partitions.md %} - -## Basic examples - -{% include {{ page.version.version }}/sql/movr-statements-geo-partitioned-replicas.md %} - -These examples focus on the basic approach and syntax for working with zone configuration. For examples demonstrating how to use constraints, see [Scenario-based examples](#scenario-based-examples). - -For more examples, see [`CONFIGURE ZONE`](configure-zone.html) and [`SHOW ZONE CONFIGURATIONS`](show-zone-configurations.html). - -### View all replication zones - -{% include {{ page.version.version }}/zone-configs/view-all-replication-zones.md %} - -For more information, see [`SHOW ZONE CONFIGURATIONS`](show-zone-configurations.html). - -### View the default replication zone - -{% include {{ page.version.version }}/zone-configs/view-the-default-replication-zone.md %} - -For more information, see [`SHOW ZONE CONFIGURATIONS`](show-zone-configurations.html). - -### Edit the default replication zone - -{% include {{ page.version.version }}/zone-configs/edit-the-default-replication-zone.md %} - -For more information, see [`CONFIGURE ZONE`](configure-zone.html). - -### Create a replication zone for a system range - -{% include {{ page.version.version }}/zone-configs/create-a-replication-zone-for-a-system-range.md %} - -For more information, see [`CONFIGURE ZONE`](configure-zone.html). - -### Create a replication zone for a database - -{% include {{ page.version.version }}/zone-configs/create-a-replication-zone-for-a-database.md %} - -For more information, see [`CONFIGURE ZONE`](configure-zone.html). - -### Create a replication zone for a table - -{% include {{ page.version.version }}/zone-configs/create-a-replication-zone-for-a-table.md %} - -For more information, see [`CONFIGURE ZONE`](configure-zone.html). - -### Create a replication zone for a secondary index - -{% include {{ page.version.version }}/zone-configs/create-a-replication-zone-for-a-secondary-index.md %} - -For more information, see [`CONFIGURE ZONE`](configure-zone.html). - -### Create a replication zone for a partition - -{% include {{ page.version.version }}/zone-configs/create-a-replication-zone-for-a-table-partition.md %} - -For more information, see [`CONFIGURE ZONE`](configure-zone.html). - -### Reset a replication zone - -{% include {{ page.version.version }}/zone-configs/reset-a-replication-zone.md %} - -For more information, see [`CONFIGURE ZONE`](configure-zone.html). 
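-
-As a quick sketch (assuming a table named `t` that has a custom zone configuration), resetting returns the zone configuration to the values it inherits from its parent zone:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE t CONFIGURE ZONE USING DEFAULT;
-~~~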
- -### Remove a replication zone - -{% include {{ page.version.version }}/zone-configs/remove-a-replication-zone.md %} - -For more information, see [`CONFIGURE ZONE`](configure-zone.html). - -### Constrain leaseholders to specific availability zones - -{% include {{ page.version.version }}/zone-configs/constrain-leaseholders-to-specific-datacenters.md %} - -For more information, see [`CONFIGURE ZONE`](configure-zone.html). - -## Scenario-based examples - -### Even replication across availability zones - -**Scenario:** - -- You have 6 nodes across 3 availability zones, 2 nodes in each availability zone. -- You want data replicated 3 times, with replicas balanced evenly across all three availability zones. - -**Approach:** - -1. Start each node with its availability zone location specified in the [`--locality`](cockroach-start.html#locality) flag: - - Availability zone 1: - - ~~~ shell - $ cockroach start --insecure --advertise-addr= --locality=az=us-1 \ - --join=,, - $ cockroach start --insecure --advertise-addr= --locality=az=us-1 \ - --join=,, - ~~~ - - Availability zone 2: - - ~~~ shell - $ cockroach start --insecure --advertise-addr= --locality=az=us-2 \ - --join=,, - $ cockroach start --insecure --advertise-addr= --locality=az=us-2 \ - --join=,, - ~~~ - - Availability zone 3: - - ~~~ shell - $ cockroach start --insecure --advertise-addr= --locality=az=us-3 \ - --join=,, - $ cockroach start --insecure --advertise-addr= --locality=az=us-3 \ - --join=,, - ~~~ - -2. Initialize the cluster: - - ~~~ shell - $ cockroach init --insecure --host= - ~~~ - -There's no need to make zone configuration changes; by default, the cluster is configured to replicate data three times, and even without explicit constraints, the cluster will aim to diversify replicas across node localities. - -### Per-replica constraints to specific availability zones - -**Scenario:** - -- You have 5 nodes across 5 availability zones in 3 regions, 1 node in each availability zone. -- You want data replicated 3 times, with a quorum of replicas for a database holding West Coast data centered on the West Coast and a database for nation-wide data replicated across the entire country. - -**Approach:** - -1. Start each node with its region and availability zone location specified in the [`--locality`](cockroach-start.html#locality) flag: - - Start the five nodes: - - ~~~ shell - $ cockroach start --insecure --advertise-addr= --locality=region=us-west1,az=us-west1-a \ - --join=,,,, - $ cockroach start --insecure --advertise-addr= --locality=region=us-west1,az=us-west1-b \ - --join=,,,, - $ cockroach start --insecure --advertise-addr= --locality=region=us-central1,az=us-central1-a \ - --join=,,,, - $ cockroach start --insecure --advertise-addr= --locality=region=us-east1,az=us-east1-a \ - --join=,,,, - $ cockroach start --insecure --advertise-addr= --locality=region=us-east1,az=us-east1-b \ - --join=,,,, - ~~~ - - Initialize the cluster: - - ~~~ shell - $ cockroach init --insecure --host= - ~~~ - -2. On any node, open the [built-in SQL client](cockroach-sql.html): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --insecure - ~~~ - -3. Create the database for the West Coast application: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE DATABASE west_app_db; - ~~~ - -4. 
Configure a replication zone for the database: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > ALTER DATABASE west_app_db - CONFIGURE ZONE USING constraints = '{"+region=us-west1": 2, "+region=us-central1": 1}', num_replicas = 3; - ~~~ - - ~~~ - CONFIGURE ZONE 1 - ~~~ - -5. View the replication zone: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SHOW ZONE CONFIGURATION FOR DATABASE west_app_db; - ~~~ - - ~~~ - target | raw_config_sql - +----------------------+--------------------------------------------------------------------+ - DATABASE west_app_db | ALTER DATABASE west_app_db CONFIGURE ZONE USING - | range_min_bytes = 134217728, - | range_max_bytes = 536870912, - | gc.ttlseconds = 90000, - | num_replicas = 3, - | constraints = '{+region=us-central1: 1, +region=us-west1: 2}', - | lease_preferences = '[]' - (1 row) - ~~~ - - Two of the database's three replicas will be put in `region=us-west1` and its remaining replica will be put in `region=us-central1`. This gives the application the resilience to survive the total failure of any one availability zone while providing low-latency reads and writes on the West Coast because a quorum of replicas are located there. - -6. No configuration is needed for the nation-wide database. The cluster is configured to replicate data 3 times and spread them as widely as possible by default. Because the first key-value pair specified in each node's locality is considered the most significant part of each node's locality, spreading data as widely as possible means putting one replica in each of the three different regions. - -### Multiple applications writing to different databases - -**Scenario:** - -- You have 2 independent applications connected to the same CockroachDB cluster, each application using a distinct database. -- You have 6 nodes across 2 availability zones, 3 nodes in each availability zone. -- You want the data for application 1 to be replicated 5 times, with replicas evenly balanced across both availability zones. -- You want the data for application 2 to be replicated 3 times, with all replicas in a single availability zone. - -**Approach:** - -1. Start each node with its availability zone location specified in the [`--locality`](cockroach-start.html#locality) flag: - - Availability zone 1: - - ~~~ shell - $ cockroach start --insecure --advertise-addr= --locality=az=us-1 \ - --join=,,,,, - $ cockroach start --insecure --advertise-addr= --locality=az=us-1 \ - --join=,,,,, - $ cockroach start --insecure --advertise-addr= --locality=az=us-1 \ - --join=,,,,, - ~~~ - - Availability zone 2: - - ~~~ shell - $ cockroach start --insecure --advertise-addr= --locality=az=us-2 \ - --join=,,,,, - $ cockroach start --insecure --advertise-addr= --locality=az=us-2 \ - --join=,,,,, - $ cockroach start --insecure --advertise-addr= --locality=az=us-2 \ - --join=,,,,, - ~~~ - - Initialize the cluster: - - ~~~ shell - $ cockroach init --insecure --host= - ~~~ - -2. On any node, open the [built-in SQL client](cockroach-sql.html): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --insecure - ~~~ - -3. Create the database for application 1: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE DATABASE app1_db; - ~~~ - -4. Configure a replication zone for the database used by application 1: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > ALTER DATABASE app1_db CONFIGURE ZONE USING num_replicas = 5; - ~~~ - - ~~~ - CONFIGURE ZONE 1 - ~~~ - -5. 
View the replication zone: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SHOW ZONE CONFIGURATION FOR DATABASE app1_db; - ~~~ - - ~~~ - target | raw_config_sql - +------------------+---------------------------------------------+ - DATABASE app1_db | ALTER DATABASE app1_db CONFIGURE ZONE USING - | range_min_bytes = 134217728, - | range_max_bytes = 536870912, - | gc.ttlseconds = 90000, - | num_replicas = 5, - | constraints = '[]', - | lease_preferences = '[]' - (1 row) - ~~~ - - Nothing else is necessary for application 1's data. Since all nodes specify their availability zone locality, the cluster will aim to balance the data in the database used by application 1 between availability zones 1 and 2. - -6. Still in the SQL client, create a database for application 2: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE DATABASE app2_db; - ~~~ - -7. Configure a replication zone for the database used by application 2: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > ALTER DATABASE app2_db CONFIGURE ZONE USING constraints = '[+az=us-2]'; - ~~~ - -8. View the replication zone: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SHOW ZONE CONFIGURATION FOR DATABASE app2_db; - ~~~ - - ~~~ - target | raw_config_sql - +------------------+---------------------------------------------+ - DATABASE app2_db | ALTER DATABASE app2_db CONFIGURE ZONE USING - | range_min_bytes = 134217728, - | range_max_bytes = 536870912, - | gc.ttlseconds = 90000, - | num_replicas = 3, - | constraints = '[+az=us-2]', - | lease_preferences = '[]' - (1 row) - ~~~ - - The required constraint will force application 2's data to be replicated only within the `us-2` availability zone. - -### Stricter replication for a table and its secondary indexes - -**Scenario:** - -- You have 7 nodes, 5 with SSD drives and 2 with HDD drives. -- You want data replicated 3 times by default. -- Speed and availability are important for a specific table and its indexes, which are queried very frequently, however, so you want the data in the table and secondary indexes to be replicated 5 times, preferably on nodes with SSD drives. - -**Approach:** - -1. Start each node with `ssd` or `hdd` specified as store attributes: - - 5 nodes with SSD storage: - - ~~~ shell - $ cockroach start --insecure --advertise-addr= --store=path=node1,attrs=ssd \ - --join=,, - $ cockroach start --insecure --advertise-addr= --store=path=node2,attrs=ssd \ - --join=,, - $ cockroach start --insecure --advertise-addr= --store=path=node3,attrs=ssd \ - --join=,, - $ cockroach start --insecure --advertise-addr= --store=path=node4,attrs=ssd \ - --join=,, - $ cockroach start --insecure --advertise-addr= --store=path=node5,attrs=ssd \ - --join=,, - ~~~ - - 2 nodes with HDD storage: - - ~~~ shell - $ cockroach start --insecure --advertise-addr= --store=path=node6,attrs=hdd \ - --join=,, - $ cockroach start --insecure --advertise-addr= --store=path=node7,attrs=hdd \ - --join=,, - ~~~ - - Initialize the cluster: - - ~~~ shell - $ cockroach init --insecure --host= - ~~~ - -2. On any node, open the [built-in SQL client](cockroach-sql.html): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --insecure - ~~~ - -3. Create a database and table: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE DATABASE db; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE TABLE db.important_table; - ~~~ - -4. 
Configure a replication zone for the table that must be replicated more strictly: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > ALTER TABLE db.important_table CONFIGURE ZONE USING num_replicas = 5, constraints = '[+ssd]'; - ~~~ - -5. View the replication zone: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SHOW ZONE CONFIGURATION FOR TABLE db.important_table; - ~~~ - - ~~~ - target | raw_config_sql - +-------------------------------+---------------------------------------------+ - TABLE db.important_table | ALTER TABLE db.important_table CONFIGURE ZONE USING - | range_min_bytes = 134217728, - | range_max_bytes = 536870912, - | gc.ttlseconds = 90000, - | num_replicas = 5, - | constraints = '[+ssd]', - | lease_preferences = '[]' - (1 row) - ~~~ - - The secondary indexes on the table will use the table's replication zone, so all data for the table will be replicated 5 times, and the required constraint will place the data on nodes with `ssd` drives. - -### Tweaking the replication of system ranges - -**Scenario:** - -- You have nodes spread across 7 availability zones. -- You want data replicated 5 times by default. -- For better performance, you want a copy of the meta ranges in all of the availability zones. -- To save disk space, you only want the internal timeseries data replicated 3 times by default. - -**Approach:** - -1. Start each node with a different [locality](cockroach-start.html#locality) attribute: - - ~~~ shell - $ cockroach start --insecure --advertise-addr= --locality=az=us-1 \ - --join=,, - $ cockroach start --insecure --advertise-addr= --locality=az=us-2 \ - --join=,, - $ cockroach start --insecure --advertise-addr= --locality=az=us-3 \ - --join=,, - $ cockroach start --insecure --advertise-addr= --locality=az=us-4 \ - --join=,, - $ cockroach start --insecure --advertise-addr= --locality=az=us-5 \ - --join=,, - $ cockroach start --insecure --advertise-addr= --locality=az=us-6 \ - --join=,, - $ cockroach start --insecure --advertise-addr= --locality=az=us-7 \ - --join=,, - ~~~ - - Initialize the cluster: - - ~~~ shell - $ cockroach init --insecure --host= - ~~~ - -2. On any node, open the [built-in SQL client](cockroach-sql.html): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --insecure - ~~~ - -3. Configure the default replication zone: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > ALTER RANGE default CONFIGURE ZONE USING num_replicas = 5; - ~~~ - -4. View the replication zone: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SHOW ZONE CONFIGURATION FOR RANGE default; - ~~~ - ~~~ - target | raw_config_sql - +---------------+------------------------------------------+ - RANGE default | ALTER RANGE default CONFIGURE ZONE USING - | range_min_bytes = 134217728, - | range_max_bytes = 536870912, - | gc.ttlseconds = 90000, - | num_replicas = 5, - | constraints = '[]', - | lease_preferences = '[]' - (1 row) - ~~~ - - All data in the cluster will be replicated 5 times, including both SQL data and the internal system data. - -5. 
Configure the `meta` replication zone: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > ALTER RANGE meta CONFIGURE ZONE USING num_replicas = 7; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SHOW ZONE CONFIGURATION FOR RANGE meta; - ~~~ - ~~~ - target | raw_config_sql - +------------+---------------------------------------+ - RANGE meta | ALTER RANGE meta CONFIGURE ZONE USING - | range_min_bytes = 134217728, - | range_max_bytes = 536870912, - | gc.ttlseconds = 3600, - | num_replicas = 7, - | constraints = '[]', - | lease_preferences = '[]' - (1 row) - ~~~ - - The `meta` addressing ranges will be replicated such that one copy is in all 7 availability zones, while all other data will be replicated 5 times. - -6. Configure the `timeseries` replication zone: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > ALTER RANGE timeseries CONFIGURE ZONE USING num_replicas = 3; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SHOW ZONE CONFIGURATION FOR RANGE timeseries; - ~~~ - ~~~ - target | raw_config_sql - +------------------+---------------------------------------------+ - RANGE timeseries | ALTER RANGE timeseries CONFIGURE ZONE USING - | range_min_bytes = 134217728, - | range_max_bytes = 536870912, - | gc.ttlseconds = 90000, - | num_replicas = 3, - | constraints = '[]', - | lease_preferences = '[]' - (1 row) - ~~~ - - The timeseries data will only be replicated 3 times without affecting the configuration of all other data. - -## See also - -- [`SHOW ZONE CONFIGURATIONS`](show-zone-configurations.html) -- [`CONFIGURE ZONE`](configure-zone.html) -- [`SHOW PARTITIONS`](show-partitions.html) -- [SQL Statements](sql-statements.html) -- [Table Partitioning](partitioning.html) -- [Replication Reports](query-replication-reports.html) diff --git a/src/current/v22.1/configure-zone.md b/src/current/v22.1/configure-zone.md deleted file mode 100644 index ae5de0d0690..00000000000 --- a/src/current/v22.1/configure-zone.md +++ /dev/null @@ -1,132 +0,0 @@ ---- -title: CONFIGURE ZONE -summary: Use the CONFIGURE ZONE statement to add, modify, reset, and remove replication zones. -toc: true -docs_area: reference.sql ---- - -`CONFIGURE ZONE` is a subcommand of the `ALTER DATABASE`, `ALTER TABLE`, `ALTER INDEX`, `ALTER PARTITION`, and [`ALTER RANGE`](alter-range.html) statements and is used to add, modify, reset, or remove [replication zones](configure-replication-zones.html) for those objects. To view details about existing replication zones, see [`SHOW ZONE CONFIGURATIONS`](show-zone-configurations.html). - -In CockroachDB, you can use **replication zones** to control the number and location of replicas for specific sets of data, both when replicas are first added and when they are rebalanced to maintain cluster equilibrium. - -{{site.data.alerts.callout_info}} -Adding replication zones for secondary indexes and partitions is an [Enterprise-only](enterprise-licensing.html) feature. -{{site.data.alerts.end}} - -## Synopsis - -**alter_zone_database_stmt ::=** - -
    -{% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/alter_zone_database.html %} -
    - -**alter_zone_table_stmt ::=** - -
    -{% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/alter_zone_table.html %} -
    - -**alter_zone_index_stmt ::=** - -
    -{% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/alter_zone_index.html %} -
    - -**alter_zone_partition_stmt ::=** - -
    -{% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/alter_zone_partition.html %} -
    - -**alter_zone_range_stmt ::=** - -
    -{% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/alter_zone_range.html %} -
    - -## Required privileges - -If the target is a [`system` range](#create-a-replication-zone-for-a-system-range), the [`system` database](show-databases.html#preloaded-databases), or a table in the `system` database, the user must be a member of the [`admin` role](security-reference/authorization.html#admin-role). For all other databases and tables, the user must have been granted either the [`CREATE`](grant.html#supported-privileges) or the [`ZONECONFIG`](grant.html#supported-privileges) privilege on the target database or table. - -## Parameters - - Parameter | Description ------------+------------- -`range_name` | The name of the system [range](architecture/glossary.html#architecture-range) whose [replication zone configurations](configure-replication-zones.html) you want to change. -`database_name` | The name of the [database](create-database.html) whose [replication zone configurations](configure-replication-zones.html) you want to change.
    If you directly change a database's zone configuration with `ALTER DATABASE ... CONFIGURE ZONE`, CockroachDB will block all [`ALTER DATABASE ... SET PRIMARY REGION`](set-primary-region.html) statements on the database. -`table_name` | The name of the [table](create-table.html) whose [replication zone configurations](configure-replication-zones.html) you want to change. -`partition_name` | The name of the [partition](partitioning.html) whose [replication zone configurations](configure-replication-zones.html) you want to change. -`index_name` | The name of the [index](indexes.html) whose [replication zone configurations](configure-replication-zones.html) you want to change. -`variable` | The name of the [variable](#variables) to change. -`value` | The value of the variable to change. -`DISCARD` | Remove a replication zone. - -### Variables - -{% include {{ page.version.version }}/zone-configs/variables.md %} - -## Viewing schema changes - -{% include {{ page.version.version }}/misc/schema-change-view-job.md %} - -## Examples - -{% include {{ page.version.version }}/sql/movr-statements-geo-partitioned-replicas.md %} - -### Edit a replication zone - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE users CONFIGURE ZONE USING range_min_bytes = 0, range_max_bytes = 90000, gc.ttlseconds = 89999, num_replicas = 4; -~~~ - -~~~ -CONFIGURE ZONE 1 -~~~ - -### Edit the default replication zone - -{% include {{ page.version.version }}/zone-configs/edit-the-default-replication-zone.md %} - -### Create a replication zone for a database - -{% include {{ page.version.version }}/zone-configs/create-a-replication-zone-for-a-database.md %} - -### Create a replication zone for a table - -{% include {{ page.version.version }}/zone-configs/create-a-replication-zone-for-a-table.md %} - -### Create a replication zone for a secondary index - -{% include {{ page.version.version }}/zone-configs/create-a-replication-zone-for-a-secondary-index.md %} - -### Create a replication zone for a partition - -{% include {{ page.version.version }}/zone-configs/create-a-replication-zone-for-a-table-partition.md %} - -### Create a replication zone for a system range - -{% include {{ page.version.version }}/zone-configs/create-a-replication-zone-for-a-system-range.md %} - -### Reset a replication zone - -{% include {{ page.version.version }}/zone-configs/reset-a-replication-zone.md %} - -### Remove a replication zone - -{% include {{ page.version.version }}/zone-configs/remove-a-replication-zone.md %} - -## See also - -- [Configure Replication Zones](configure-replication-zones.html) -- [`PARTITION BY`](partition-by.html) -- [`SHOW ZONE CONFIGURATIONS`](show-zone-configurations.html) -- [`ALTER DATABASE`](alter-database.html) -- [`ALTER TABLE`](alter-table.html) -- [`ALTER INDEX`](alter-index.html) -- [`ALTER PARTITION`](alter-partition.html) -- [`ALTER RANGE`](alter-range.html) -- [`SHOW JOBS`](show-jobs.html) -- [Table Partitioning](partitioning.html) -- [SQL Statements](sql-statements.html) diff --git a/src/current/v22.1/connect-to-the-database.md b/src/current/v22.1/connect-to-the-database.md deleted file mode 100644 index 7f16cbfd273..00000000000 --- a/src/current/v22.1/connect-to-the-database.md +++ /dev/null @@ -1,1358 +0,0 @@ ---- -title: Connect to a CockroachDB Cluster -summary: How to connect to a CockroachDB cluster from your application -toc: true -docs_area: develop ---- - -This page documents the required connection configuration for [fully-supported third-party tools]({% link {{ 
page.version.version }}/third-party-database-tools.md %}). - -For a list of all supported cluster connection parameters, see the [`cockroach` Connection Parameters]({% link {{ page.version.version }}/connection-parameters.md %}). - -For a list of community-supported third-party tools, see [Third-Party Tools Supported by the Community]({% link {{ page.version.version }}/community-tooling.md %}). CockroachDB supports both native drivers and the PostgreSQL wire protocol. Most client drivers and ORM frameworks connect to CockroachDB like they connect to PostgreSQL. - -## Step 1. Select your deployment - -
    - - - -
    - -
    -To connect to a CockroachDB {{ site.data.products.cloud }} cluster, you need a general connection string or connection parameters, which include the username, host, database, and port. To find these, open the **Connect** dialog for your cluster in the [CockroachDB {{ site.data.products.cloud }} Console](https://cockroachlabs.cloud) and select either **General connection string** or **Parameters only** as the option. -
    - -
    -To connect to a CockroachDB {{ site.data.products.core }} cluster, you need the [general connection string]({% link {{ page.version.version }}/connection-parameters.md %}#connect-using-a-url) or [connection parameters]({% link {{ page.version.version }}/connection-parameters.md %}#connect-using-discrete-parameters) for your cluster. - -The connection strings and parameters for your cluster are output when you [start the cluster]({% link {{ page.version.version }}/cockroach-start.md %}#standard-output). -
    - -## Step 2. Select your OS - -
    - - - -
    - -## Step 3. Select your language - -
    - - - - - -
    - -## Step 4. Select your driver or ORM - -
    -
    - - - - -
    -
    - -
    -
    - - - - -
    -
    - -
    -
    - - - -
    -
    - -
    -
    - - -
    -
    - -
    -
    - - -
    -
    - -## Step 5. Connect to the cluster - -
    -
    - -{% include {{ page.version.version }}/connect/connection-url.md %} - -
    -
    - -
    -
    - -{% include {{ page.version.version }}/connect/connection-url.md %} - -
    -
    - -
    -
    - -{% include {{ page.version.version }}/connect/connection-url.md %} - -
    -
    - -
    -
    - -{% include {{ page.version.version }}/connect/connection-url.md %} - -
    -
    - -
    - -
    -The **Connect to cluster** dialog shows information about how to connect to your cluster. - -1. Select **Java** from the **Select option** dropdown. -1. Copy the `JDBC_DATABASE_URL` environment variable command provided and save it in a secure location. -
    - -
-Copy the JDBC connection string from the `sql (JDBC)` field in the output generated when you started the cluster. -
    - -{% include {{ page.version.version }}/connect/jdbc-connection-url.md %} - -
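If you only have a general `postgres://` connection string (for example, from the output of `cockroach start`), you can print its JDBC form with `cockroach convert-url`, which is described in more detail on the client connection parameters page. A minimal sketch with placeholder values:

{% include_cached copy-clipboard.html %}
~~~ shell
# Print driver-specific forms of a connection string, including the JDBC URL (substitute your own values).
$ cockroach convert-url --url "postgres://{host}:{port}/{database}"
~~~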
    - -
    - -
    - -To connect to CockroachDB with [node-postgres](https://node-postgres.com), create a new [`Client`](https://node-postgres.com/apis/client) object with a connection string. - -For example: - -{% include_cached copy-clipboard.html %} -~~~ js -const { Client } = require('pg') - -const client = new Client(process.env.DATABASE_URL) - -client.connect() -~~~ - -Where `DATABASE_URL` is an environment variable set to a valid CockroachDB connection string. - -node-postgres accepts the following format for CockroachDB connection strings: - -
    - -{% include_cached copy-clipboard.html %} -~~~ -postgresql://:@:/?sslmode=verify-full -~~~ - -
    - -
    - -{% include_cached copy-clipboard.html %} -~~~ -postgresql://:@:/?sslmode=verify-full&sslrootcert= -~~~ - -
    - -
    - -{% include_cached copy-clipboard.html %} -~~~ -postgresql://@:/?sslmode=verify-full&sslrootcert=&sslcert=&sslkey= -~~~ - -{% include {{ page.version.version }}/connect/core-note.md %} - -
    - -For more information about connecting with node-postgres, see the [official node-postgres documentation](https://node-postgres.com/features/connecting). - -
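If the connection string in `DATABASE_URL` is rejected by the driver, it can help to test the same string outside of Node.js first. The following is a minimal check with the built-in SQL client, assuming the `cockroach` binary is installed on the machine running the application and `DATABASE_URL` is exported in your shell:

{% include_cached copy-clipboard.html %}
~~~ shell
# Open a SQL shell using the exact connection string the driver will use.
$ cockroach sql --url "$DATABASE_URL"
~~~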
    - -
    - -To connect to CockroachDB with [Sequelize](https://sequelize.org), create a `Sequelize` object with the [CockroachDB Sequelize adapter](https://github.com/cockroachdb/sequelize-cockroachdb). - -For example: - -{% include_cached copy-clipboard.html %} -~~~ js -const Sequelize = require("sequelize-cockroachdb"); - -const connectionString = process.env.DATABASE_URL -const sequelize = new Sequelize(connectionString) -~~~ - -Where `DATABASE_URL` is an environment variable set to a valid CockroachDB connection string. - -Sequelize accepts the following format for CockroachDB connection strings: - -
    - -{% include_cached copy-clipboard.html %} -~~~ -postgresql://:@:/?sslmode=verify-full -~~~ - -
    - -
    - -{% include_cached copy-clipboard.html %} -~~~ -postgresql://:@:/?sslmode=verify-full&sslrootcert= -~~~ - -
    - -
    - -{% include_cached copy-clipboard.html %} -~~~ -postgresql://@:/?sslmode=verify-full&sslrootcert=&sslcert=&sslkey= -~~~ - -{% include {{ page.version.version }}/connect/core-note.md %} - -
    - -{{site.data.alerts.callout_info}} -To connect to CockroachDB with Sequelize, you must install the [CockroachDB Sequelize adapter](https://github.com/cockroachdb/sequelize-cockroachdb). -{{site.data.alerts.end}} - -For more information about connecting with Sequelize, see the [official Sequelize documentation](https://sequelize.org/docs/v6/). - -
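A minimal way to add both packages to a Node.js project, assuming you use npm and that the adapter is published under the same name as its repository:

{% include_cached copy-clipboard.html %}
~~~ shell
# Install Sequelize and the CockroachDB adapter (package name assumed from the repository linked above).
$ npm install sequelize sequelize-cockroachdb
~~~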
    - -
    - -
    - -To connect to CockroachDB with [TypeORM](https://typeorm.io), update your project's [`DataSource`](https://typeorm.io/data-source) with the required connection properties. - -For example, suppose that you are defining the `DataSource` for your application in a file named `datasource.ts`. - -
    - -
    - -CockroachDB {{ site.data.products.basic }} and {{ site.data.products.standard }} requires you to specify the `type`, `url`, and `ssl` properties: - -{% include_cached copy-clipboard.html %} -~~~ ts -import { DataSource } from "typeorm" - -export const AppDataSource = new DataSource({ - type: "cockroachdb", - url: process.env.DATABASE_URL, - ssl: true, - ... -}); -~~~ - -Where `DATABASE_URL` is an environment variable set to a valid CockroachDB connection string. - -TypeORM accepts the following format for CockroachDB connection strings: - -{% include_cached copy-clipboard.html %} -~~~ -postgresql://:@:/ -~~~ - -
    - -
    - -CockroachDB {{ site.data.products.advanced }} requires you to specify the `type`, `url`, and `ssl` properties: - -{% include_cached copy-clipboard.html %} -~~~ ts -import { DataSource } from "typeorm" - -export const AppDataSource = new DataSource({ - type: "cockroachdb", - url: process.env.DATABASE_URL, - ssl: { - ca: process.env.CA_CERT - }, - ... -}); -~~~ - -Where: - -- `DATABASE_URL` is an environment variable set to a valid CockroachDB connection string. -- `CA_CERT` is an environment variable set to the root certificate [downloaded from the CockroachDB Cloud Console]({% link cockroachcloud/authentication.md %}#node-identity-verification). - -TypeORM accepts the following format for CockroachDB connection strings: - -{% include_cached copy-clipboard.html %} -~~~ -postgresql://:@:/ -~~~ - -
    - -
    - -CockroachDB {{ site.data.products.core }} requires you to specify the `type`, `url`, and `ssl` properties: - -{% include_cached copy-clipboard.html %} -~~~ ts -import { DataSource } from "typeorm" - -export const AppDataSource = new DataSource({ - type: "cockroachdb", - url: process.env.DATABASE_URL, - ssl: { - ca: process.env.CA_CERT, - key: process.env.CLIENT_KEY, - cert: process.env.CLIENT_CERT - }, - ... -}); -~~~ - -Where: - -- `DATABASE_URL` is an environment variable set to a valid CockroachDB connection string. -- `CA_CERT` is an environment variable set to the root certificate.
    You can generate this certificate with [`cockroach cert create-ca`]({% link {{ page.version.version }}/cockroach-cert.md %}#subcommands), or you can use a [custom CA cert]({% link {{ page.version.version }}/create-security-certificates-custom-ca.md %}). -- `CLIENT_KEY` is an environment variable set to the [client key]({% link {{ page.version.version }}/cockroach-cert.md %}#client-key-and-certificates) for the user connecting to the cluster.
    You can generate this key with [`cockroach cert create-client`]({% link {{ page.version.version }}/cockroach-cert.md %}#subcommands). -- `CLIENT_CERT` is an environment variable set to the [client certificate]({% link {{ page.version.version }}/cockroach-cert.md %}#client-key-and-certificates) for the user connecting to the cluster.
    You can generate this certificate with [`cockroach cert create-client`]({% link {{ page.version.version }}/cockroach-cert.md %}#subcommands). - -{% include {{ page.version.version }}/connect/core-note.md %} - -TypeORM accepts the following format for CockroachDB connection strings: - -{% include_cached copy-clipboard.html %} -~~~ -postgresql://@:/ -~~~ - -
    - -You can then import the `AppDataSource` into any file in your project and call `AppDataSource.initialize()` to connect to CockroachDB: - -{% include_cached copy-clipboard.html %} -~~~ ts -import { AppDataSource } from "./datasource"; - -AppDataSource.initialize() - .then(async () => { - // Execute operations - }); -~~~ - -For more information about connecting with TypeORM, see the [official TypeORM documentation](https://typeorm.io/#/). - -
    - -
    - -To connect to CockroachDB with [Prisma](https://prisma.io/), set the `url` field of the `datasource` block in your Prisma schema to your database connection URL: - -{% include_cached copy-clipboard.html %} -~~~ js -generator client { - provider = "prisma-client-js" -} - -datasource db { - provider = "cockroachdb" - url = env("DATABASE_URL") -} - -model Widget { - id String @id @default(dbgenerated("gen_random_uuid()")) @db.Uuid -} -~~~ - -Where `DATABASE_URL` is an environment variable set to a valid CockroachDB connection string. - -Prisma accepts the following format for CockroachDB connection strings: - -
    - -{% include_cached copy-clipboard.html %} -~~~ -postgresql://:@:/?sslmode=verify-full -~~~ - -
    - -
    - -{% include_cached copy-clipboard.html %} -~~~ -postgresql://:@:/?sslmode=verify-full&sslrootcert= -~~~ - -
    - -
    - -{% include_cached copy-clipboard.html %} -~~~ -postgresql://@:/?sslmode=verify-full&sslrootcert=&sslcert=&sslkey= -~~~ - -{% include {{ page.version.version }}/connect/core-note.md %} - -
    - -For more information about connecting with Prisma, see the [official Prisma documentation](https://www.prisma.io/cockroachdb). - -
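Once the `datasource` block points at your cluster, the Prisma client is typically regenerated and the schema applied with the Prisma CLI. A minimal sketch, assuming the standard `prisma` CLI is available through `npx` and `DATABASE_URL` is set in your environment:

{% include_cached copy-clipboard.html %}
~~~ shell
# Regenerate the Prisma client from the schema, then create the corresponding tables in the cluster.
$ npx prisma generate
$ npx prisma db push
~~~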
    - -## Connection parameters - -
    - -Parameter | Description -----------|------------ -`` | The [SQL user]({% link {{ page.version.version }}/security-reference/authorization.md %}#sql-users) connecting to the cluster. -`` | The password for the SQL user connecting to the cluster. -`` | The host on which the CockroachDB node is running. -`` | The port at which the CockroachDB node is listening. -`` | The name of the (existing) database. - -
    - -
    - -Parameter | Description -----------|------------ -`` | The [SQL user]({% link {{ page.version.version }}/security-reference/authorization.md %}#sql-users) connecting to the cluster. -`` | The password for the SQL user connecting to the cluster. -`` | The host on which the CockroachDB node is running. -`` | The port at which the CockroachDB node is listening. -`` | The name of the (existing) database. -`` | The path to the root certificate that you [downloaded from the CockroachDB Cloud Console]({% link cockroachcloud/authentication.md %}#node-identity-verification). - -
    - -
    - -Parameter | Description -----------|------------ -`` | The [SQL user]({% link {{ page.version.version }}/security-reference/authorization.md %}#sql-users) connecting to the cluster. -`` | The host on which the CockroachDB node is running. -`` | The port at which the CockroachDB node is listening. -`` | The name of the (existing) database. -`` | The path to the root certificate.
    You can generate this certificate with [`cockroach cert create-ca`]({% link {{ page.version.version }}/cockroach-cert.md %}#subcommands), or you can use a [custom CA cert]({% link {{ page.version.version }}/create-security-certificates-custom-ca.md %}). -`` | The path to the [client certificate]({% link {{ page.version.version }}/cockroach-cert.md %}#client-key-and-certificates) for the user connecting to the cluster.
    You can generate this certificate with [`cockroach cert create-client`]({% link {{ page.version.version }}/cockroach-cert.md %}#subcommands). -`` | The path to the [client key]({% link {{ page.version.version }}/cockroach-cert.md %}#client-key-and-certificates) for the user connecting to the cluster.
    You can generate this key with [`cockroach cert create-client`]({% link {{ page.version.version }}/cockroach-cert.md %}#subcommands). - -{% include {{ page.version.version }}/connect/core-note.md %} - -
    - -
    - -
    - -
    - -{{site.data.alerts.callout_info}} -To connect to a CockroachDB {{ site.data.products.basic }} or {{ site.data.products.standard }} cluster from a Python application, you must have a valid CA certificate located at ~/.postgresql/root.crt.
For instructions on downloading a CA certificate from the CockroachDB {{ site.data.products.cloud }} Console, see [Connect to a CockroachDB {{ site.data.products.basic }} Cluster]({% link cockroachcloud/connect-to-a-basic-cluster.md %}) or [Connect to a CockroachDB {{ site.data.products.standard }} Cluster]({% link cockroachcloud/connect-to-your-cluster.md %}). -{{site.data.alerts.end}} - -
    - -
    - -To connect to CockroachDB with [Psycopg2](https://www.psycopg.org), pass a connection string to the [`psycopg2.connect` function](https://www.psycopg.org/docs/connection.html). - -For example: - -{% include_cached copy-clipboard.html %} -~~~ python -import psycopg2 -import os - -conn = psycopg2.connect(os.environ['DATABASE_URL']) -~~~ - -Where `DATABASE_URL` is an environment variable set to a valid CockroachDB connection string. - -Psycopg2 accepts the following format for CockroachDB connection strings: - -
    - -{% include_cached copy-clipboard.html %} -~~~ -postgresql://{username}:{password}@{host}:{port}/{database}?sslmode=verify-full -~~~ - -
    - -
    - -{% include_cached copy-clipboard.html %} -~~~ -postgresql://{username}:{password}@{host}:{port}/{database}?sslmode=verify-full&sslrootcert={root-cert} -~~~ - -
    - -
    - -{% include_cached copy-clipboard.html %} -~~~ -postgresql://{username}@{host}:{port}/{database}?sslmode=verify-full&sslrootcert={root-cert}&sslcert={client-cert}&sslkey={client-key} -~~~ - -{% include {{ page.version.version }}/connect/core-note.md %} - -
    - -For more information about connecting with Psycopg, see the [official Psycopg documentation](https://www.psycopg.org/docs). - -
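If Psycopg2 is not yet installed in your environment, a common approach, assuming you use pip and the pre-built binary wheel is acceptable for your use case:

{% include_cached copy-clipboard.html %}
~~~ shell
# Install the pre-built Psycopg2 wheel; use the `psycopg2` source package instead for production builds.
$ pip install psycopg2-binary
~~~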
    - -
    - -To connect to CockroachDB with [Psycopg3](https://www.psycopg.org), pass a connection string to the [`psycopg.connect` function](https://www.psycopg.org/psycopg3/docs/basic/usage.html). - -For example: - -{% include_cached copy-clipboard.html %} -~~~ python -import psycopg -import os - -with psycopg.connect(os.environ['DATABASE_URL']) as conn: - # application logic here -~~~ - -Where `DATABASE_URL` is an environment variable set to a valid CockroachDB connection string. - -Psycopg accepts the following format for CockroachDB connection strings: - -
    - -{% include_cached copy-clipboard.html %} -~~~ -postgresql://{username}:{password}@{host}:{port}/{database}?sslmode=verify-full -~~~ - -
    - -
    - -{% include_cached copy-clipboard.html %} -~~~ -postgresql://{username}:{password}@{host}:{port}/{database}?sslmode=verify-full&sslrootcert={root-cert} -~~~ - -
    - -
    - -{% include_cached copy-clipboard.html %} -~~~ -postgresql://{username}@{host}:{port}/{database}?sslmode=verify-full&sslrootcert={root-cert}&sslcert={client-cert}&sslkey={client-key} -~~~ - -{% include {{ page.version.version }}/connect/core-note.md %} - -
    - -For more information about connecting with Psycopg, see the [official Psycopg documentation](https://www.psycopg.org/psycopg3/docs/basic/index.html). - -
    - -
    - -To connect to CockroachDB with [SQLAlchemy](http://docs.sqlalchemy.org/), [create an `Engine` object](https://docs.sqlalchemy.org/core/engines.html) by passing the connection string to the `create_engine` function. - -For example: - -{% include_cached copy-clipboard.html %} -~~~ python -from sqlalchemy import create_engine -import os - -engine = create_engine(os.environ['DATABASE_URL']) -engine.connect() -~~~ - -Where `DATABASE_URL` is an environment variable set to a valid CockroachDB connection string. - -SQLAlchemy accepts the following format for CockroachDB connection strings: - -
    - -{% include_cached copy-clipboard.html %} -~~~ -cockroachdb://{username}:{password}@{host}:{port}/{database}?sslmode=verify-full -~~~ - -
    - -
    - -{% include_cached copy-clipboard.html %} -~~~ -cockroachdb://{username}:{password}@{host}:{port}/{database}?sslmode=verify-full&sslrootcert={root-cert} -~~~ - -
    - -
    - -{% include_cached copy-clipboard.html %} -~~~ -cockroachdb://{username}@{host}:{port}/{database}?sslmode=verify-full&sslrootcert={root-cert}&sslcert={client-cert}&sslkey={client-key} -~~~ - -{% include {{ page.version.version }}/connect/core-note.md %} - -
    - -{{site.data.alerts.callout_info}} -To connect to CockroachDB with SQLAlchemy, you must install the [CockroachDB SQLAlchemy adapter](https://github.com/cockroachdb/sqlalchemy-cockroachdb). -{{site.data.alerts.end}} - -For more information about connecting with SQLAlchemy, see the [official SQLAlchemy documentation](https://docs.sqlalchemy.org/core/engines_connections.html). - -{{site.data.alerts.callout_info}} -In order for SQLAlchemy to use the CockroachDB adapter, the connection string must begin with `cockroachdb://`. You can use the following code to modify the general connection string, which begins with `postgresql://`, to the format that works with SQLAlchemy and the CockroachDB adapter: - -{% include_cached copy-clipboard.html %} -~~~ python -engine = create_engine(os.environ['DATABASE_URL'].replace("postgresql://", "cockroachdb://")) -engine.connect() -~~~ - -{{site.data.alerts.end}} - -
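A minimal way to install the pieces referenced above, assuming you use pip and that the adapter is published under the same name as its repository:

{% include_cached copy-clipboard.html %}
~~~ shell
# Install SQLAlchemy, the CockroachDB adapter (package name assumed from its repository), and a PostgreSQL driver.
$ pip install sqlalchemy sqlalchemy-cockroachdb psycopg2-binary
~~~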
    - -
    - -
    - -To connect to CockroachDB from a [Django](https://www.djangoproject.com) application, update the `DATABASES` property in the project's `settings.py` file. - -Django accepts the following format for CockroachDB connection information: - -
    - -
    - -{% include_cached copy-clipboard.html %} -~~~ -## settings.py - -... - -DATABASES = { - 'default': { - 'ENGINE': 'django_cockroachdb', - 'NAME': '{database}', - 'USER': '{username}', - 'PASSWORD': '{password}', - 'HOST': '{host}', - 'PORT': '{port}', - 'OPTIONS': { - 'sslmode': 'verify-full' - }, - }, -} - -... -~~~ - -
    - -
    - -{% include_cached copy-clipboard.html %} -~~~ -## settings.py - -... - -DATABASES = { - 'default': { - 'ENGINE': 'django_cockroachdb', - 'NAME': '{database}', - 'USER': '{username}', - 'PASSWORD': '{password}', - 'HOST': '{host}', - 'PORT': '{port}', - 'OPTIONS': { - 'sslmode': 'verify-full', - 'sslrootcert': os.path.expandvars('{root-cert}'), - }, - }, -} - -... -~~~ - -
    - -
    - -{% include_cached copy-clipboard.html %} -~~~ -## settings.py - -... - -DATABASES = { - 'default': { - 'ENGINE': 'django_cockroachdb', - 'NAME': '{database}', - 'USER': '{username}', - 'HOST': '{host}', - 'PORT': '{port}', - 'OPTIONS': { - 'sslmode': 'verify-full', - 'sslrootcert': os.path.expandvars('{root-cert}'), - 'sslcert': os.path.expandvars('{client-cert}'), - 'sslkey': os.path.expandvars('{client-key}'), - }, - }, -} - -... -~~~ - -{% include {{ page.version.version }}/connect/core-note.md %} - -
    - -{{site.data.alerts.callout_info}} -To connect to CockroachDB with Django, you must install the [CockroachDB Django adapter](https://github.com/cockroachdb/django-cockroachdb). -{{site.data.alerts.end}} - -For more information about connecting with Django, see the [official Django documentation](https://docs.djangoproject.com/en/4.0/). - -
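A minimal way to install the backend referenced in the callout above, assuming you use pip and that the backend is published under the hyphenated form of the engine name:

{% include_cached copy-clipboard.html %}
~~~ shell
# Install Django and the CockroachDB backend (package name assumed from the engine name shown above).
$ pip install django django-cockroachdb
~~~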
    - -## Connection parameters - -
    - -Parameter | Description -----------|------------ -`{username}` | The [SQL user]({% link {{ page.version.version }}/security-reference/authorization.md %}#sql-users) connecting to the cluster. -`{password}` | The password for the SQL user connecting to the cluster. -`{host}` | The host on which the CockroachDB node is running. -`{port}` | The port at which the CockroachDB node is listening. -`{database}` | The name of the (existing) database. - -
    - -
    - -Parameter | Description -----------|------------ -`{username}` | The [SQL user]({% link {{ page.version.version }}/security-reference/authorization.md %}#sql-users) connecting to the cluster. -`{password}` | The password for the SQL user connecting to the cluster. -`{host}` | The host on which the CockroachDB node is running. -`{port}` | The port at which the CockroachDB node is listening. -`{database}` | The name of the (existing) database. -`{root-cert}` | The path to the root certificate that you [downloaded from the CockroachDB Cloud Console]({% link cockroachcloud/authentication.md %}#node-identity-verification). - -
    - -
    - -Parameter | Description -----------|------------ -`{username}` | The [SQL user]({% link {{ page.version.version }}/security-reference/authorization.md %}#sql-users) connecting to the cluster. -`{host}` | The host on which the CockroachDB node is running. -`{port}` | The port at which the CockroachDB node is listening. -`{database}` | The name of the (existing) database. -`{root-cert}` | The path to the root certificate.
    You can generate this certificate with [`cockroach cert create-ca`]({% link {{ page.version.version }}/cockroach-cert.md %}#subcommands), or you can use a [custom CA cert]({% link {{ page.version.version }}/create-security-certificates-custom-ca.md %}). -`{client-cert}` | The path to the [client certificate]({% link {{ page.version.version }}/cockroach-cert.md %}#client-key-and-certificates) for the user connecting to the cluster.
    You can generate this certificate with [`cockroach cert create-client`]({% link {{ page.version.version }}/cockroach-cert.md %}#subcommands). -`{client-key}` | The path to the [client key]({% link {{ page.version.version }}/cockroach-cert.md %}#client-key-and-certificates) for the user connecting to the cluster.
    You can generate this key with [`cockroach cert create-client`]({% link {{ page.version.version }}/cockroach-cert.md %}#subcommands). - -{% include {{ page.version.version }}/connect/core-note.md %} - -
    - -
    - -
    - -
    - -To connect to CockroachDB with [pgx](https://github.com/jackc/pgx), use the `pgx.Connect` function. - -For example: - -{% include_cached copy-clipboard.html %} -~~~ go -package main - -import ( - "context" - "log" - - "github.com/jackc/pgx/v4" -) - -func main() { - conn, err := pgx.Connect(context.Background(), "") - if err != nil { - log.Fatal(err) - } - defer conn.Close(context.Background()) -} -~~~ - -pgx accepts the following format for CockroachDB connection strings: - -
    - -{% include_cached copy-clipboard.html %} -~~~ -postgresql://{username}:{password}@{host}:{port}/{database}?sslmode=verify-full -~~~ - -
    - -
    - -{% include_cached copy-clipboard.html %} -~~~ -postgresql://{username}:{password}@{host}:{port}/{database}?sslmode=verify-full&sslrootcert={root-cert} -~~~ - -
    - -
    - -{% include_cached copy-clipboard.html %} -~~~ -postgresql://{username}@{host}:{port}/{database}?sslmode=verify-full&sslrootcert={root-cert}&sslcert={client-cert}&sslkey={client-key} -~~~ - -{% include {{ page.version.version }}/connect/core-note.md %} - -
    - -For more information about connecting with pgx, see the [official pgx documentation](https://pkg.go.dev/github.com/jackc/pgx). - -
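To make the driver available to your Go module, the import path shown in the example above can be fetched with `go get`:

{% include_cached copy-clipboard.html %}
~~~ shell
# Add the v4 pgx driver (the version used in the import path above) to go.mod.
$ go get github.com/jackc/pgx/v4
~~~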
    - -
    - -To connect to CockroachDB with [pq](https://godoc.org/github.com/lib/pq), use the [`sql.Open` function](https://go.dev/doc/tutorial/database-access). - -For example: - -{% include_cached copy-clipboard.html %} -~~~ go -package main - -import ( - "database/sql" - "log" - - _ "github.com/lib/pq" -) - -func main() { - db, err := sql.Open("postgres", "") - if err != nil { - log.Fatal(err) - } - defer db.Close() -} -~~~ - -pq accepts the following format for CockroachDB connection strings: - -
    - -{% include_cached copy-clipboard.html %} -~~~ -postgresql://{username}:{password}@{host}:{port}/{database}?sslmode=verify-full -~~~ - -
    - -
    - -{% include_cached copy-clipboard.html %} -~~~ -postgresql://{username}:{password}@{host}:{port}/{database}?sslmode=verify-full&sslrootcert={root-cert} -~~~ - -
    - -
    - -{% include_cached copy-clipboard.html %} -~~~ -postgresql://{username}@{host}:{port}/{database}?sslmode=verify-full&sslrootcert={root-cert}&sslcert={client-cert}&sslkey={client-key} -~~~ - -{% include {{ page.version.version }}/connect/core-note.md %} - -
    - -For more information about connecting with pq, see the [official pq documentation](https://pkg.go.dev/github.com/lib/pq). - -
    - -
    - - -To connect to CockroachDB with [GORM](http://gorm.io), use the `gorm.Open` function, with the GORM `postgres` driver. - -For example: - -{% include_cached copy-clipboard.html %} -~~~ go -package main - -import ( - "log" - - "gorm.io/driver/postgres" - "gorm.io/gorm" -) - -func main() { - db, err := gorm.Open(postgres.Open(""), &gorm.Config{}) - if err != nil { - log.Fatal(err) - } -} -~~~ - -GORM accepts the following format for CockroachDB connection strings: - -
    - -{% include_cached copy-clipboard.html %} -~~~ -postgresql://{username}:{password}@{host}:{port}/{database}?sslmode=verify-full -~~~ - -
    - -
    - -{% include_cached copy-clipboard.html %} -~~~ -postgresql://{username}:{password}@{host}:{port}/{database}?sslmode=verify-full&sslrootcert={root-cert} -~~~ - -
    - -
    - -{% include_cached copy-clipboard.html %} -~~~ -postgresql://{username}@{host}:{port}/{database}?sslmode=verify-full&sslrootcert={root-cert}&sslcert={client-cert}&sslkey={client-key} -~~~ - -{% include {{ page.version.version }}/connect/core-note.md %} - -
    - -For more information about connecting with GORM, see the [official GORM documentation](https://gorm.io/docs). - -
    - -## Connection parameters - -
    - -Parameter | Description -----------|------------ -`{username}` | The [SQL user]({% link {{ page.version.version }}/security-reference/authorization.md %}#sql-users) connecting to the cluster. -`{password}` | The password for the SQL user connecting to the cluster. -`{host}` | The host on which the CockroachDB node is running. -`{port}` | The port at which the CockroachDB node is listening. -`{database}` | The name of the (existing) database. - -
    - -
    - -Parameter | Description -----------|------------ -`{username}` | The [SQL user]({% link {{ page.version.version }}/security-reference/authorization.md %}#sql-users) connecting to the cluster. -`{password}` | The password for the SQL user connecting to the cluster. -`{host}` | The host on which the CockroachDB node is running. -`{port}` | The port at which the CockroachDB node is listening. -`{database}` | The name of the (existing) database. -`{root-cert}` | The path to the root certificate that you [downloaded from the CockroachDB Cloud Console]({% link cockroachcloud/authentication.md %}#node-identity-verification). - -
    - -
    - -Parameter | Description -----------|------------ -`{username}` | The [SQL user]({% link {{ page.version.version }}/security-reference/authorization.md %}#sql-users) connecting to the cluster. -`{host}` | The host on which the CockroachDB node is running. -`{port}` | The port at which the CockroachDB node is listening. -`{database}` | The name of the (existing) database. -`{root-cert}` | The path to the root certificate.
    You can generate this certificate with [`cockroach cert create-ca`]({% link {{ page.version.version }}/cockroach-cert.md %}#subcommands), or you can use a [custom CA cert]({% link {{ page.version.version }}/create-security-certificates-custom-ca.md %}). -`{client-cert}` | The path to the [client certificate]({% link {{ page.version.version }}/cockroach-cert.md %}#client-key-and-certificates) for the user connecting to the cluster.
    You can generate this certificate with [`cockroach cert create-client`]({% link {{ page.version.version }}/cockroach-cert.md %}#subcommands). -`{client-key}` | The path to the [client key]({% link {{ page.version.version }}/cockroach-cert.md %}#client-key-and-certificates) for the user connecting to the cluster.
    You can generate this key with [`cockroach cert create-client`]({% link {{ page.version.version }}/cockroach-cert.md %}#subcommands). - -{% include {{ page.version.version }}/connect/core-note.md %} - -
    - -
    - -
    - -
    - -To connect to CockroachDB with the [JDBC](https://jdbc.postgresql.org) driver, create a `DataSource` object ([`PGSimpleDataSource` or `PGPoolingDataSource`](https://jdbc.postgresql.org/documentation/datasource/#applications-datasource)), and set the connection string with the `setUrl` class method. - -For example: - -{% include_cached copy-clipboard.html %} -~~~ java -PGSimpleDataSource ds = new PGSimpleDataSource(); -ds.setUrl(System.getenv("JDBC_DATABASE_URL")); -~~~ - -Where `JDBC_DATABASE_URL` is an environment variable set to a valid JDBC-compatible CockroachDB connection string. - -JDBC accepts the following format for CockroachDB connection strings: - -
    - -{% include_cached copy-clipboard.html %} -~~~ -jdbc:postgresql://{host}:{port}/{database}?password={password}&sslmode=verify-full&user={username} -~~~ - -
    - -
    - -{% include_cached copy-clipboard.html %} -~~~ -jdbc:postgresql://{host}:{port}/{database}?user={username}&password={password}&sslmode=verify-full&sslrootcert={root-cert} -~~~ - -
    - -
    - -{% include_cached copy-clipboard.html %} -~~~ -jdbc:postgresql://{host}:{port}/{database}?user={username}&sslmode=verify-full&sslrootcert={root-cert}&sslcert={client-cert}&sslkey={client-key} -~~~ - -{% include {{ page.version.version }}/connect/core-note.md %} - -
    - -For more information about connecting with JDBC, see the [official JDBC documentation](https://jdbc.postgresql.org/documentation/). - -
    - -
    - -To connect to CockroachDB with [Hibernate](https://hibernate.org/orm) ORM, set the object's `hibernate.connection.url` property to a valid CockroachDB connection string. - -For example, if you are bootstrapping your application with a [`ServiceRegistry`](https://docs.jboss.org/hibernate/orm/current/userguide/html_single/Hibernate_User_Guide.html#bootstrap-native-registry) object, first read from the Hibernate configuration file, and then set the `hibernate.connection.url` with the `applySetting` class method: - -{% include_cached copy-clipboard.html %} -~~~ java -StandardServiceRegistry standardRegistry = new StandardServiceRegistryBuilder() - .configure( "hibernate.cfg.xml" ).applySetting("hibernate.connection.url", System.getenv("DATABASE_URL")) - .build(); - -Metadata metadata = new MetadataSources( standardRegistry ) - .getMetadataBuilder() - .build(); - -SessionFactory sessionFactory = metadata.getSessionFactoryBuilder() - .build(); -~~~ - -Where `DATABASE_URL` is an environment variable set to a valid CockroachDB connection string. - -Hibernate accepts the following format for CockroachDB connection strings: - -
    - -{% include_cached copy-clipboard.html %} -~~~ -jdbc:postgresql://{host}:{port}/{database}?password={password}&sslmode=verify-full&user={username} -~~~ - -
    - -
    - -{% include_cached copy-clipboard.html %} -~~~ -jdbc:postgresql://{host}:{port}/{database}?user={username}&password={password}&sslmode=verify-full&sslrootcert={root-cert} -~~~ - -
    - -
    - -{% include_cached copy-clipboard.html %} -~~~ -jdbc:postgresql://{host}:{port}/{database}?user={username}&sslmode=verify-full&sslrootcert={root-cert}&sslcert={client-cert}&sslkey={client-key} -~~~ - -{% include {{ page.version.version }}/connect/core-note.md %} - -
    - -{{site.data.alerts.callout_info}} -To connect to CockroachDB with Hibernate, you must specify the [CockroachDB Hibernate dialect]({% link {{ page.version.version }}/install-client-drivers.md %}?filters=java#hibernate) in your `hibernate.cfg.xml` configuration file. -{{site.data.alerts.end}} - -For more information about connecting with Hibernate, see the [official Hibernate documentation](https://hibernate.org/orm/documentation). - -
    - -## Connection parameters - -
    - -Parameter | Description -----------|------------ -`{username}` | The [SQL user]({% link {{ page.version.version }}/security-reference/authorization.md %}#sql-users) connecting to the cluster. -`{password}` | The password for the SQL user connecting to the cluster. -`{host}` | The host on which the CockroachDB node is running. -`{port}` | The port at which the CockroachDB node is listening. -`{database}` | The name of the (existing) database. - -
    - -
    - -Parameter | Description -----------|------------ -`{username}` | The [SQL user]({% link {{ page.version.version }}/security-reference/authorization.md %}#sql-users) connecting to the cluster. -`{password}` | The password for the SQL user connecting to the cluster. -`{host}` | The host on which the CockroachDB node is running. -`{port}` | The port at which the CockroachDB node is listening. -`{database}` | The name of the (existing) database. -`{root-cert}` | The [URL-encoded](https://wikipedia.org/wiki/Percent-encoding) path to the root certificate that you [downloaded from the CockroachDB Cloud Console]({% link cockroachcloud/authentication.md %}#node-identity-verification). - -
    - -
    - -Parameter | Description -----------|------------ -`{username}` | The [SQL user]({% link {{ page.version.version }}/security-reference/authorization.md %}#sql-users) connecting to the cluster. -`{host}` | The host on which the CockroachDB node is running. -`{port}` | The port at which the CockroachDB node is listening. -`{database}` | The name of the (existing) database. -`{root-cert}` | The [URL-encoded](https://wikipedia.org/wiki/Percent-encoding) path to the root certificate.
    You can generate this certificate with [`cockroach cert create-ca`]({% link {{ page.version.version }}/cockroach-cert.md %}#subcommands), or you can use a [custom CA cert]({% link {{ page.version.version }}/create-security-certificates-custom-ca.md %}). -`{client-cert}` | The [URL-encoded](https://wikipedia.org/wiki/Percent-encoding) path to the [client certificate]({% link {{ page.version.version }}/cockroach-cert.md %}#client-key-and-certificates) for the user connecting to the cluster.
    You can generate this certificate with [`cockroach cert create-client`]({% link {{ page.version.version }}/cockroach-cert.md %}#subcommands). -`{client-key}` | The [URL-encoded](https://wikipedia.org/wiki/Percent-encoding) path to the [PKCS#8](https://tools.ietf.org/html/rfc5208)-formatted [client key]({% link {{ page.version.version }}/cockroach-cert.md %}#client-key-and-certificates) for the user connecting to the cluster.
    You can generate this key with [`cockroach cert create-client --also-generate-pkcs8-key`]({% link {{ page.version.version }}/cockroach-cert.md %}#subcommands). - -{% include {{ page.version.version }}/connect/core-note.md %} - -
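Because the JDBC driver expects the client key in PKCS#8 format, the key is typically generated with the `--also-generate-pkcs8-key` flag mentioned above. A minimal sketch for a user named `maxroach`, assuming the certificate directory and CA key paths shown are placeholders for the ones you used when creating the cluster's certificates:

{% include_cached copy-clipboard.html %}
~~~ shell
# Create the client certificate and key, plus a PKCS#8 copy of the key, in the certs directory.
$ cockroach cert create-client maxroach --certs-dir=certs --ca-key=my-safe-directory/ca.key --also-generate-pkcs8-key
~~~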
    - -
    - -
- -{{site.data.alerts.callout_info}} -To connect to a CockroachDB {{ site.data.products.basic }} or {{ site.data.products.standard }} cluster from a Ruby application, you must have a valid CA certificate located at `~/.postgresql/root.crt`. For instructions on downloading a CA certificate from the CockroachDB {{ site.data.products.cloud }} Console, see [Connect to a CockroachDB {{ site.data.products.basic }} Cluster]({% link cockroachcloud/connect-to-a-basic-cluster.md %}) or [Connect to a CockroachDB {{ site.data.products.standard }} Cluster]({% link cockroachcloud/connect-to-your-cluster.md %}). -{{site.data.alerts.end}} - -
    - -To connect to CockroachDB with the [Ruby pg](https://rubygems.org/gems/pg) driver, use the `PG.connect` function. - -For example: - -{% include_cached copy-clipboard.html %} -~~~ ruby -#!/usr/bin/env ruby - -require 'pg' - -conn = PG.connect(ENV['DATABASE_URL']) -~~~ - -Where `DATABASE_URL` is an environment variable set to a valid CockroachDB connection string. - -pg accepts the following format for CockroachDB connection strings: - -
    - -{% include_cached copy-clipboard.html %} -~~~ -postgresql://{username}:{password}@{host}:{port}/{database}?sslmode=verify-full -~~~ - -
    - -
    - -{% include_cached copy-clipboard.html %} -~~~ -postgresql://{username}:{password}@{host}:{port}/{database}?sslmode=verify-full&sslrootcert={root-cert} -~~~ - -
    - -
    - -{% include_cached copy-clipboard.html %} -~~~ -postgresql://{username}@{host}:{port}/{database}?sslmode=verify-full&sslrootcert={root-cert}&sslcert={client-cert}&sslkey={client-key} -~~~ - -{% include {{ page.version.version }}/connect/core-note.md %} - -
    - -For more information about connecting with pg, see the [official pg documentation](https://www.rubydoc.info/gems/pg). - -
    - -
    - -
    - -To connect to CockroachDB with [Active Record](https://github.com/rails/rails/tree/main/activerecord) from a Rails app, update the database configuration in `config/database.yml`: - -~~~ yaml -default: &default - adapter: cockroachdb - url: <%= ENV['DATABASE_URL'] %> - -... -~~~ - -Where `DATABASE_URL` is an environment variable set to a valid CockroachDB connection string. - -Active Record accepts the following format for CockroachDB connection strings: - -
    - -
    - -{% include_cached copy-clipboard.html %} -~~~ -cockroachdb://{username}:{password}@{host}:{port}/{database}?sslmode=verify-full -~~~ - -
    - -
    - -{% include_cached copy-clipboard.html %} -~~~ -cockroachdb://{username}:{password}@{host}:{port}/{database}?sslmode=verify-full&sslrootcert={root-cert} -~~~ - -
    - -
    - -{% include_cached copy-clipboard.html %} -~~~ -cockroachdb://{username}@{host}:{port}/{database}?sslmode=verify-full&sslrootcert={root-cert}&sslcert={client-cert}&sslkey={client-key} -~~~ - -{% include {{ page.version.version }}/connect/core-note.md %} - -
    - -{{site.data.alerts.callout_info}} -To connect to CockroachDB with Active Record, you must install the [Active Record CockroachDB adapter](https://rubygems.org/gems/activerecord-cockroachdb-adapter). -{{site.data.alerts.end}} - -For more information about connecting with Active Record, see the [official Active Record documentation](https://guides.rubyonrails.org/active_record_querying.html). - -
    - -## Connection parameters - -
    - -Parameter | Description -----------|------------ -`{username}` | The [SQL user]({% link {{ page.version.version }}/security-reference/authorization.md %}#sql-users) connecting to the cluster. -`{password}` | The password for the SQL user connecting to the cluster. -`{host}` | The host on which the CockroachDB node is running. -`{port}` | The port at which the CockroachDB node is listening. -`{database}` | The name of the (existing) database. - -
    - -
    - -Parameter | Description -----------|------------ -`{username}` | The [SQL user]({% link {{ page.version.version }}/security-reference/authorization.md %}#sql-users) connecting to the cluster. -`{password}` | The password for the SQL user connecting to the cluster. -`{host}` | The host on which the CockroachDB node is running. -`{port}` | The port at which the CockroachDB node is listening. -`{database}` | The name of the (existing) database. -`{root-cert}` | The path to the root certificate that you [downloaded from the CockroachDB Cloud Console]({% link cockroachcloud/authentication.md %}#node-identity-verification). - -
    - -
    - -Parameter | Description -----------|------------ -`{username}` | The [SQL user]({% link {{ page.version.version }}/security-reference/authorization.md %}#sql-users) connecting to the cluster. -`{host}` | The host on which the CockroachDB node is running. -`{port}` | The port at which the CockroachDB node is listening. -`{database}` | The name of the (existing) database. -`{root-cert}` | The path to the root certificate.
    You can generate this certificate with [`cockroach cert create-ca`]({% link {{ page.version.version }}/cockroach-cert.md %}#subcommands), or you can use a [custom CA cert]({% link {{ page.version.version }}/create-security-certificates-custom-ca.md %}). -`{client-cert}` | The path to the [client certificate]({% link {{ page.version.version }}/cockroach-cert.md %}#client-key-and-certificates) for the user connecting to the cluster.
    You can generate this certificate with [`cockroach cert create-client`]({% link {{ page.version.version }}/cockroach-cert.md %}#subcommands). -`{client-key}` | The path to the [client key]({% link {{ page.version.version }}/cockroach-cert.md %}#client-key-and-certificates) for the user connecting to the cluster.
    You can generate this key with [`cockroach cert create-client`]({% link {{ page.version.version }}/cockroach-cert.md %}#subcommands). - -{% include {{ page.version.version }}/connect/core-note.md %} - -
    - -
    - -## See also - -- [Install a Driver or ORM Framework]({% link {{ page.version.version }}/install-client-drivers.md %}) -- [Connection Pooling]({% link {{ page.version.version }}/connection-pooling.md %}) -- [`cockroach` Connection Parameters]({% link {{ page.version.version }}/connection-parameters.md %}) -- [Example Apps]({% link {{ page.version.version }}/example-apps.md %}) diff --git a/src/current/v22.1/connection-parameters.md b/src/current/v22.1/connection-parameters.md deleted file mode 100644 index 44fad3d4c17..00000000000 --- a/src/current/v22.1/connection-parameters.md +++ /dev/null @@ -1,300 +0,0 @@ ---- -title: Client Connection Parameters -summary: This page describes the parameters used to establish a client connection. -toc: true -docs_area: reference.cli ---- - -Client applications, including [`cockroach` client -commands](cockroach-commands.html), work by establishing a network -connection to a CockroachDB cluster. The client connection parameters -determine which CockroachDB cluster they connect to, and how to -establish this network connection. - -## Supported connection parameters - -Most client apps, including `cockroach` client commands, determine -which CockroachDB server to connect to using a [PostgreSQL connection -URL](#connect-using-a-url). When using a URL, a client can also -specify additional SQL-level parameters. This mode provides the most -configuration flexibility. - -In addition, all `cockroach` client commands also accept [discrete -connection parameters](#connect-using-discrete-parameters) that can -specify the connection parameters separately from a URL. - -## When to use a URL and when to use discrete parameters - -Specifying client parameters using a URL may be more convenient during -experimentation, as it facilitates copy-pasting the connection -parameters (the URL) between different tools: the output of `cockroach -start`, other `cockroach` commands, GUI database visualizer, -programming tools, etc. - -Discrete parameters may be more convenient in automation, where the -components of the configuration are filled in separately from -different variables in a script or a service manager. - -## Connect using a URL - -A connection URL has the following format: - -{% include_cached copy-clipboard.html %} -~~~ -postgres://:@:/? -~~~ - -`cockroach` client commands also support [UNIX domain socket URIs](https://wikipedia.org/wiki/Unix_domain_socket) of the following form: - -{% include_cached copy-clipboard.html %} -~~~ -postgres://:@?host=&port=& -~~~ - -Component | Description | Required -----------|-------------|--------- -`` | The [SQL user](create-user.html) that will own the client session. | ✗ -`` | The user's password. It is not recommended to pass the password in the URL directly.

    Note that passwords with special characters must be passed as [query string parameters](#additional-connection-parameters) (e.g., `postgres://maxroach@localhost:26257/movr?password=`) and not as a component in the connection URL (e.g., `postgres://maxroach:@localhost:26257/movr`).

    [Find more detail about how CockroachDB handles passwords.](authentication.html#client-authentication) | ✗ -`` | The host name or address of a CockroachDB node or load balancer. | Required by most client drivers. -`` | The port number of the SQL interface of the CockroachDB node or load balancer. The default port number for CockroachDB is 26257. Use this value when in doubt. | Required by most client drivers. -`` | A database name to use as [current database](sql-name-resolution.html#current-database). Defaults to `defaultdb` when using `cockroach` client commands. Drivers and ORMs may have different defaults. | ✗ -`` | The directory path to the client listening for a socket connection. | Required when specifying a Unix domain socket URI. -`` | [Additional connection parameters](#additional-connection-parameters), including SSL/TLS certificate settings. | ✗ - -{{site.data.alerts.callout_info}} -For `cockroach` commands that accept a URL, you can specify the URL with the command-line flag `--url`. -If `--url` is not specified but -the environment variable `COCKROACH_URL` is defined, the environment -variable is used. Otherwise, the `cockroach` command will use -[discrete connection parameters](#connect-using-discrete-parameters) -as described below. -{{site.data.alerts.end}} - -{{site.data.alerts.callout_info}} -The `` part is not used for [`cockroach` -commands](cockroach-commands.html) other than [`cockroach -sql`](cockroach-sql.html). A warning -is currently printed if it is mistakenly specified, and -future versions of CockroachDB may return an error in that case. -{{site.data.alerts.end}} - -### Additional connection parameters - -The following additional parameters can be passed after the `?` character in the URL. After the first parameter is specified, any additional parameters must be separated by an ampersand (`&`). - -Parameter | Description | Default value -----------|-------------|--------------- -`application_name` | An initial value for the [`application_name` session variable](set-vars.html).

    Note: For [Java JDBC](build-a-java-app-with-cockroachdb.html), use `ApplicationName`. | Empty string. -`sslmode` | Which type of secure connection to use: `disable`, `allow`, `prefer`, `require`, `verify-ca` or `verify-full`. See [Secure Connections With URLs](#secure-connections-with-urls) for details. | `disable` -`sslrootcert` | Path to the [CA certificate](cockroach-cert.html), when `sslmode` is not `disable`. | Empty string. -`sslcert` | Path to the [client certificate](cockroach-cert.html), when `sslmode` is not `disable`. | Empty string. -`sslkey` | Path to the [client private key](cockroach-cert.html), when `sslmode` is not `disable`. | Empty string. -`password` | The SQL user's password. It is not recommended to pass the password in the URL directly.

    Note that passwords with special characters must be passed as [query string parameters](#additional-connection-parameters) (e.g., `postgres://maxroach@localhost:26257/movr?password=`) and not as a component in the connection URL (e.g., `postgres://maxroach:@localhost:26257/movr`). | Empty string -`options` | [Additional options](#supported-options-parameters) to be passed to the server. | Empty string - -#### Supported `options` parameters - -CockroachDB supports the following `options` parameters. After the first `options` parameter is specified, any additional parameters in the same connection string must be separated by a space. - -Parameter | Description -----------|------------- -`--cluster=` | Identifies your tenant cluster on a multi-tenant host. For example, `funny-skunk-123`. This option is deprecated. The `host` in the connection string now includes the tenant information. -`-c =` | Sets a [session variable](set-vars.html) for the SQL session. - -{{site.data.alerts.callout_info}} -Note that some drivers require certain characters to be properly encoded in URL connection strings. For example, spaces in [a JDBC connection string](https://jdbc.postgresql.org/documentation/use/#connection-parameters) must be specified as `%20`. -{{site.data.alerts.end}} - -### Secure connections with URLs - -The following values are supported for `sslmode`, although only the first and the last are recommended for use. - -Parameter | Description | Recommended for use -----------|-------------|-------------------- -`sslmode=disable` | Do not use an encrypted, secure connection at all. | Use during development. -`sslmode=allow` | Enable a secure connection only if the server requires it.

    **Not supported in all clients.** | -`sslmode=prefer` | Try to establish a secure connection, but accept an insecure connection if the server does not support secure connections.

    **Not supported in all clients.** | -`sslmode=require` | Force a secure connection. An error occurs if the secure connection cannot be established. | -`sslmode=verify-ca` | Force a secure connection and verify that the server certificate is signed by a known CA. | -`sslmode=verify-full` | Force a secure connection, verify that the server certificate is signed by a known CA, and verify that the server address matches that specified in the certificate. | Use for [secure deployments](secure-a-cluster.html). - -{{site.data.alerts.callout_danger}} -Some client drivers and the `cockroach` commands do not support -`sslmode=allow` and `sslmode=prefer`. Check the documentation of your -SQL driver to determine whether these options are supported. -{{site.data.alerts.end}} - -### Convert a URL for different drivers - - The subcommand `cockroach convert-url` converts a connection URL, such as those printed out by [`cockroach start`](cockroach-start.html) or included in the online documentation, to the syntax recognized by various [client drivers](third-party-database-tools.html#drivers). For example: - -{% include_cached copy-clipboard.html %} -~~~ -$ ./cockroach convert-url --url "postgres://foo/bar" -~~~ - -~~~ -# Connection URL for libpq (C/C++), psycopg (Python), lib/pq & pgx (Go),node-postgres (JS) -and most pq-compatible drivers: - postgresql://root@foo:26257/bar -# Connection DSN (Data Source Name) for Postgres drivers that accept DSNs - most drivers -and also ODBC: - database=bar user=root host=foo port=26257 -# Connection URL for JDBC (Java and JVM-based languages): - jdbc:postgresql://foo:26257/bar?user=root -~~~ - -### Example URL for an insecure connection - -The following URL is suitable to connect to a CockroachDB node using an insecure connection: - -{% include_cached copy-clipboard.html %} -~~~ -postgres://root@servername:26257/mydb?sslmode=disable -~~~ - -This specifies a connection for the `root` user to server `servername` -on port 26257 (the default CockroachDB SQL port), with `mydb` set as -current database. `sslmode=disable` makes the connection insecure. - -### Example URL for a secure connection - -The following URL is suitable to connect to a CockroachDB node using a secure connection: - -{% include_cached copy-clipboard.html %} -~~~ -postgres://root@servername:26257/mydb?sslmode=verify-full&sslrootcert=path/to/ca.crt&sslcert=path/to/client.username.crt&sslkey=path/to/client.username.key -~~~ - -This uses the following components: - -- User `root` -- Host name `servername`, port number 26257 (the default CockroachDB SQL port) -- Current database `mydb` -- SSL/TLS mode `verify-full`: - - Root CA certificate `path/to/ca.crt` - - Client certificate `path/to/client.username.crt` - - Client key `path/to/client.username.key` - -For details about how to create and manage SSL/TLS certificates, see -[Create Security Certificates](cockroach-cert.html) and -[Rotate Certificates](rotate-certificates.html). - -### Example URI for a Unix domain socket - -The following URI is suitable to connect to a CockroachDB cluster listening for Unix domain socket connections at `/path/to/client`: - -{% include_cached copy-clipboard.html %} -~~~ -postgres://root@?host=/path/to/client&port=26257 -~~~ - -This specifies a connection for the `root` user to an insecure cluster listening for a socket connection (e.g., a cluster started with the [`--socket-dir` flag]({% link {{ page.version.version }}/cockroach-start.md %}#networking)) at `/path/to/client`, and on port 26257. 
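Connection URLs in the formats above can generally be passed unchanged to PostgreSQL-compatible client drivers. The following sketch is an illustration only: it assumes the Go [pgx](https://pkg.go.dev/github.com/jackc/pgx/v4) v4 driver and reuses the hypothetical `servername` host and certificate paths from the secure-connection example above.

{% include_cached copy-clipboard.html %}
~~~ go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/jackc/pgx/v4"
)

func main() {
	// Hypothetical secure connection URL; substitute your own host, database,
	// and certificate paths.
	url := "postgres://root@servername:26257/mydb?sslmode=verify-full&sslrootcert=path/to/ca.crt&sslcert=path/to/client.username.crt&sslkey=path/to/client.username.key"

	conn, err := pgx.Connect(context.Background(), url)
	if err != nil {
		log.Fatal("error connecting to the database: ", err)
	}
	defer conn.Close(context.Background())

	// Run a trivial query to confirm the connection works.
	var version string
	if err := conn.QueryRow(context.Background(), "SELECT version()").Scan(&version); err != nil {
		log.Fatal(err)
	}
	fmt.Println("connected to:", version)
}
~~~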
- -### Example URI for connecting to a database with a user-defined schema - -The following URI connects to a CockroachDB cluster with a user-defined schema named `max_schema` in the `movr` database using the [`options` parameter](#supported-options-parameters). - -{% include_cached copy-clipboard.html %} -~~~ -postgres://maxroach@db.example.com:26257/movr?sslmode=verify-full&options%3D-c%20search_path%3Dmax_schema -~~~ - -{{site.data.alerts.callout_info}} -The `options=-c search_path=max_schema` parameter is URL-encoded in the example above. -{{site.data.alerts.end}} - -## Connect using discrete parameters - -Most [`cockroach` commands](cockroach-commands.html) accept connection -parameters as separate, discrete command-line flags, in addition (or -in replacement) to `--url` which [specifies all parameters as a -URL](#connect-using-a-url). - -For each command-line flag that directs a connection parameter, -CockroachDB also recognizes an environment variable. The environment -variable is used when the command-line flag is not specified. - -{% include {{ page.version.version }}/sql/connection-parameters.md %} - -### Example command-line flags for an insecure connection - -The following command-line flags establish an insecure connection: - -{% include_cached copy-clipboard.html %} -~~~ ---user=root \ ---host= ---insecure -~~~ - -This specifies a connection for the `root` user to server `servername` -on port 26257 (the default CockroachDB SQL port). `--insecure` makes -the connection insecure. - -### Example command-line flags for a secure connection - -The following command-line flags establish a secure connection: - -{% include_cached copy-clipboard.html %} -~~~ ---user=root \ ---host= ---certs-dir=path/to/certs -~~~ - -This uses the following components: - -- User `root` -- Host name `servername`, port number 26257 (the default CockroachDB SQL port) -- SSL/TLS enabled, with settings: - - Root CA certificate `path/to/certs/ca.crt` - - Client certificate `path/to/client..crt` (`path/to/certs/client.root.crt` with `--user root`) - - Client key `path/to/client..key` (`path/to/certs/client.root.key` with `--user root`) - -{{site.data.alerts.callout_info}} -When using discrete connection parameters, the file names of the CA -and client certificates and client key are derived automatically from -the value of `--certs-dir`. -{{site.data.alerts.end}} - -## Using both URL and client parameters - -Most `cockroach` commands accept both a URL and client parameters. -The information contained therein is combined in the order it appears -in the command line. - -This combination is useful so that discrete command-line flags can -override settings not otherwise set in the URL. 
- -### Example override of the current database - -The `cockroach start` command prints out the following connection URL, which connects to the `defaultdb` database: - -{% include_cached copy-clipboard.html %} -~~~ -postgres://root@servername:26257/?sslmode=disable -~~~ - -To specify `mydb` as the current database using [`cockroach sql`](cockroach-sql.html), run the following command: - -{% include_cached copy-clipboard.html %} -~~~ -cockroach sql \ ---url "postgres://root@servername:26257/?sslmode=disable" \ ---database mydb -~~~ - -This is equivalent to: - -{% include_cached copy-clipboard.html %} -~~~ -cockroach sql --url "postgres://root@servername:26257/mydb?sslmode=disable" -~~~ - -## See also - -- [`cockroach` Commands Overview](cockroach-commands.html) -- [Create Security Certificates](cockroach-cert.html) -- [Secure a Cluster](secure-a-cluster.html) -- [Create and Manage Users](security-reference/authorization.html#create-and-manage-users) diff --git a/src/current/v22.1/connection-pooling.md b/src/current/v22.1/connection-pooling.md deleted file mode 100644 index 15462e34917..00000000000 --- a/src/current/v22.1/connection-pooling.md +++ /dev/null @@ -1,134 +0,0 @@ ---- -title: Connection Pooling -summary: How to plan, configure, and use connection pools when using drivers or frameworks with CockroachDB. -toc: true -docs_area: develop ---- - -This page has information on planning, configuring, and using connection pools when using drivers or frameworks with CockroachDB. - -{% include {{ page.version.version }}/prod-deployment/terminology-vcpu.md %} - -## About connection pools - -A typical database operation consists of several steps. - -1. The driver uses a configuration to start a connection to the database server. -1. A network socket is opened on the client that connects to the database server. -1. Data is read or written through the network socket. -1. The connection is closed down. -1. The network socket is closed down and its resources are freed. - -For simple database operations, these steps are not expensive, but as an application scales up, the performance of the application will suffer as each connection is created and destroyed. One pattern for improving performance is a connection pool, a group of already configured and established network connections between the client and database that can be reused for data operations within an application. - -Each time an application reads or writes data, it will request one of the connections in the pool. After the data operation completes, the connection is returned to the pool so other operations can use it. - -Connection pooling can be a enabled as a feature of the driver, a separate library used in conjunction with a driver, a feature of an application server, or a proxy server that acts as a gateway to the database server. - -{{site.data.alerts.callout_success}} -To read more about connection pooling, see our [What is Connection Pooling, and Why Should You Care](https://www.cockroachlabs.com/blog/what-is-connection-pooling/) blog post. -{{site.data.alerts.end}} - -## Sizing connection pools - -Idle connections in CockroachDB do not consume many resources compared to PostgreSQL. Cockroach Labs estimates the memory overhead of idle connections in CockroachDB is 20 kB to 30 kB per connection. - -Creating the appropriate size pool of connections is critical to gaining maximum performance in an application. Too few connections in the pool will result in high latency as each operation waits for a connection to open up. 
But adding too many connections to the pool can also result in high latency as each connection thread is being run in parallel by the system. The time it takes for many threads to complete in parallel is typically higher than the time it takes a smaller number of threads to run sequentially. - -Each processor core can only execute one thread at a time. When there are more threads than processor cores, the system will use context switching to [time-slice](https://en.wikipedia.org/wiki/Preemption_(computing)#Time_slice) the thread execution. For example, if you have a system with a single core and two threads, processing threads 1 and 2 in parallel results in the system context switching to pause execution of thread 1 and begin executing thread 2, and then pause execution of thread 2 to resume executing thread 1. Executing thread 1 completely and then executing thread 2 will be faster because the system doesn't need to context switch, even though thread 2 had to wait until thread 1 fully completed to begin executing. - -Storage and network performance also will affect the ability of a thread to fully execute. If a thread is blocked by network or storage latency, adding connections to the pool is a good idea so other threads can execute while the original thread is being blocked. - -Cockroach Labs performed lab testing of various customer workloads and found no improvement in scalability beyond: - -**connections = (number of cores * 4)** - -Many workloads perform best when the maximum number of active connections is between 2 and 4 times the number of CPU cores in the cluster. - -If you have a large number of services connecting to the same cluster, make sure the number of concurrent active connections across all the services does not exceed this recommendation by a large amount. If each service has its own connection pool, then you should make sure the sum of all the pool sizes is close to our maximum connections recommendation. Each workload and application is different, so you should conduct testing to determine the best-performing pool sizes for each service in your architecture. - -In addition to setting a maximum connection pool size, set the maximum number of idle connections if possible. Cockroach Labs recommends setting the maximum number of idle connections to the maximum pool size. While this uses more memory, it allows many connections when concurrency is high without having to create a new connection for every new operation. - -{% include {{page.version.version}}/sql/server-side-connection-limit.md %} This may be useful in addition to your connection pool settings. - -## Validating connections in a pool - -After a connection pool initializes connections to CockroachDB clusters, those connections can occasionally break. This could be due to changes in the cluster topography, or rolling upgrades and restarts, or network disruptions. CockroachDB {{ site.data.products.cloud }} clusters periodically are restarted for patch version upgrades, for example, so previously established connections would be invalid after the restart. - -Validating connections is typically handled automatically by the connection pool. For example, in HikariCP the connection is validated whenever you request a connection from the pool, and the `keepaliveTime` property allows you to configure an interval to periodically check if the connections in the pool are valid. Whatever connection pool you use, make sure connection validation is enabled when running your application. - -## Example - -
    - - -
    - -
    - -In this example, a Java application similar to the [basic JDBC example](build-a-java-app-with-cockroachdb.html) uses the [PostgreSQL JDBC driver](https://jdbc.postgresql.org/) and [HikariCP](https://github.com/brettwooldridge/HikariCP) as the connection pool layer to connect to a CockroachDB cluster. The database is being run on 10 cores across the cluster. - -Using the connection pool formula above: - -**connections = (10 [processor cores] * 4)** - -The connection pool size should be 40. - -{% include_cached copy-clipboard.html %} -~~~ java -HikariConfig config = new HikariConfig(); -config.setJdbcUrl("jdbc:postgresql://localhost:26257/bank"); -config.setUsername("maxroach"); -config.setPassword("password"); -config.addDataSourceProperty("ssl", "true"); -config.addDataSourceProperty("sslmode", "require"); -config.addDataSourceProperty("reWriteBatchedInserts", "true"); -config.setAutoCommit(false); -config.setMaximumPoolSize(40); -config.setKeepaliveTime(150000); - -HikariDataSource ds = new HikariDataSource(config); - -Connection conn = ds.getConnection(); -~~~ - -
    - -
    - -In this example, a Go application similar to the [basic pgx example](build-a-go-app-with-cockroachdb.html) uses the [pgxpool library](https://pkg.go.dev/github.com/jackc/pgx/v4/pgxpool) to create a connection pool on a CockroachDB cluster. The database is being run on 10 cores across the cluster. - -Using the connection pool formula above: - -**connections = (10 [processor cores] * 4)** - -The connection pool size should be 40. - -~~~ go -// Set connection pool configuration, with maximum connection pool size. -config, err := pgxpool.ParseConfig("postgres://max:roach@127.0.0.1:26257/bank?sslmode=require&pool_max_conns=40") - if err != nil { - log.Fatal("error configuring the database: ", err) - } - -// Create a connection pool to the "bank" database. -dbpool, err := pgxpool.ConnectConfig(context.Background(), config) -if err != nil { - log.Fatal("error connecting to the database: ", err) -} -defer dbpool.Close() -~~~ - -This example uses the `pool_max_conns` parameter to set the maximum number of connections in the connection pool to 40. - -For a full list of connection pool configuration parameters for pgxpool, see [the pgxpool documentation](https://pkg.go.dev/github.com/jackc/pgx/v4/pgxpool#Config). - -
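The sizing and validation guidance above can also be expressed directly in the pool configuration. The following sketch is illustrative only: it assumes the same hypothetical 10-core cluster, `bank` database, and credentials as the example above, and uses pgxpool configuration fields to set an explicit maximum and minimum (idle) pool size and a periodic health check.

{% include_cached copy-clipboard.html %}
~~~ go
package main

import (
	"context"
	"log"
	"time"

	"github.com/jackc/pgx/v4/pgxpool"
)

func main() {
	config, err := pgxpool.ParseConfig("postgres://max:roach@127.0.0.1:26257/bank?sslmode=require")
	if err != nil {
		log.Fatal("error configuring the database: ", err)
	}

	// Size the pool using the guideline above: 4 connections per core on a
	// hypothetical 10-core cluster. Keeping the minimum equal to the maximum
	// holds idle connections open so bursts of work do not pay connection
	// setup costs.
	config.MaxConns = 40
	config.MinConns = 40

	// Periodically validate pooled connections so that connections broken by
	// node restarts or network disruptions are replaced before they are used.
	config.HealthCheckPeriod = 1 * time.Minute
	config.MaxConnLifetime = 30 * time.Minute

	dbpool, err := pgxpool.ConnectConfig(context.Background(), config)
	if err != nil {
		log.Fatal("error connecting to the database: ", err)
	}
	defer dbpool.Close()

	// Ping verifies that at least one connection in the pool is usable.
	if err := dbpool.Ping(context.Background()); err != nil {
		log.Fatal("connection validation failed: ", err)
	}
}
~~~

The exact values are workload-dependent; start from the `connections = cores * 4` guideline above and adjust based on your own testing.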
    - -## Implementing connection retry logic - -Some operational processes involve [node shutdown](node-shutdown.html). During the shutdown sequence, the server forcibly closes all SQL client connections to the node. If any open transactions were interrupted or not admitted by the server because of the connection closure, they will fail with a connection error. - -To be resilient to connection closures, your application should use a retry loop to reissue transactions that were open when a connection was closed. This allows procedures such as [rolling upgrades](upgrade-cockroach-version.html) to complete without interrupting your service. For details, see [Connection retry loop](node-shutdown.html#connection-retry-loop). - -If you cannot tolerate connection errors during node drain, you can change the `server.shutdown.connection_wait` [cluster setting](cluster-settings.html) to allow SQL client connections to gracefully close before CockroachDB forcibly closes them. For guidance, see [Node Shutdown](node-shutdown.html#server-shutdown-connection_wait). diff --git a/src/current/v22.1/constraints.md b/src/current/v22.1/constraints.md deleted file mode 100644 index 85112293610..00000000000 --- a/src/current/v22.1/constraints.md +++ /dev/null @@ -1,126 +0,0 @@ ---- -title: Constraints -summary: Constraints offer additional data integrity by enforcing conditions on the data within a column. -toc: true -docs_area: reference.sql ---- - -Constraints offer additional data integrity by enforcing conditions on the data within a column. Whenever values are manipulated (inserted, deleted, or updated), constraints are checked and modifications that violate constraints are rejected. - -For example, the `UNIQUE` constraint requires that all values in a column be unique from one another (except *NULL* values). If you attempt to write a duplicate value, the constraint rejects the entire statement. - - -## Supported constraints - - Constraint | Description -------------|------------- - [`CHECK`](check.html) | Values must return `TRUE` or `NULL` for a Boolean expression. - [`DEFAULT` value](default-value.html) | If a value is not defined for the constrained column in an `INSERT` statement, the `DEFAULT` value is written to the column. - [`FOREIGN KEY`](foreign-key.html) | Values must exactly match existing values from the column it references. - [`NOT NULL`](not-null.html) | Values may not be *NULL*. - [`PRIMARY KEY`](primary-key.html) | Values must uniquely identify each row *(one per table)*. This behaves as if the `NOT NULL` and `UNIQUE` constraints are applied, as well as automatically creates an [index](indexes.html) for the table using the constrained columns. - [`UNIQUE`](unique.html) | Each non-*NULL* value must be unique. This also automatically creates an [index](indexes.html) for the table using the constrained columns. - -## Using constraints - -### Add constraints - -How you add constraints depends on the number of columns you want to constrain, as well as whether or not the table is new. - -- **One column of a new table** has its constraints defined after the column's data type. For example, this statement applies the `PRIMARY KEY` constraint to `foo.a`: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE TABLE foo (a INT PRIMARY KEY); - ~~~ -- **Multiple columns of a new table** have their constraints defined after the table's columns. 
    For example, this statement applies the `PRIMARY KEY` constraint to `bar`'s columns `a` and `b`: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE TABLE bar (a INT, b INT, PRIMARY KEY (a,b)); - ~~~ - - {{site.data.alerts.callout_info}} - The `DEFAULT` and `NOT NULL` constraints cannot be applied to multiple columns. - {{site.data.alerts.end}} - -- **Existing tables** can have the following constraints added: - - `CHECK`, `FOREIGN KEY`, and `UNIQUE` constraints can be added through [`ALTER TABLE...ADD CONSTRAINT`](add-constraint.html). For example, this statement adds the `UNIQUE` constraint to `baz.id`: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > ALTER TABLE baz ADD CONSTRAINT id_unique UNIQUE (id); - ~~~ - - - `DEFAULT` values and `NOT NULL` constraints can be added through [`ALTER TABLE...ALTER COLUMN`](alter-column.html#set-or-change-a-default-value). For example, this statement adds the [`DEFAULT` value constraint](default-value.html) to `baz.bool`: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > ALTER TABLE baz ALTER COLUMN bool SET DEFAULT true; - ~~~ - - - [`PRIMARY KEY`](primary-key.html) constraints can be added with [`ADD CONSTRAINT`](add-constraint.html)/[`ADD PRIMARY KEY`](alter-table.html) in the following circumstances: - - - A [`DROP CONSTRAINT`](drop-constraint.html) statement precedes the `ADD CONSTRAINT`/`ADD PRIMARY KEY` statement in the same transaction. For examples, see the [`ADD CONSTRAINT`](add-constraint.html#examples) and [`DROP CONSTRAINT`](drop-constraint.html#examples) pages. - - The current [primary key is on `rowid`](indexes.html#creation), the default primary key created if none is explicitly defined at table creation. - - The `ADD CONSTRAINT`/`ADD PRIMARY KEY` is in the same transaction as a `CREATE TABLE` statement with no primary key defined. - -#### Order of constraints - -The order in which you list constraints is not important because constraints are applied to every modification of their respective tables or columns. - -#### Name constraints on new tables - -You can name constraints applied to new tables using the `CONSTRAINT` clause before defining the constraint: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE foo (a INT CONSTRAINT another_name PRIMARY KEY); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE bar (a INT, b INT, CONSTRAINT yet_another_name PRIMARY KEY (a,b)); -~~~ - -### View constraints - -To view a table's constraints, use [`SHOW CONSTRAINTS`](show-constraints.html) or [`SHOW CREATE`](show-create.html). - -### Remove constraints - -The procedure for removing a constraint depends on its type: - -Constraint Type | Procedure ------------------|----------- -[`CHECK`](check.html) | Use [`DROP CONSTRAINT`](drop-constraint.html). -[`DEFAULT` value](default-value.html) | Use [`ALTER COLUMN`](alter-column.html#remove-default-constraint). -[`FOREIGN KEY`](foreign-key.html) | Use [`DROP CONSTRAINT`](drop-constraint.html). -[`NOT NULL`](not-null.html) | Use [`ALTER COLUMN`](alter-column.html#remove-not-null-constraint). -[`PRIMARY KEY`](primary-key.html) | Primary key constraints can be dropped with [`DROP CONSTRAINT`](drop-constraint.html) if an [`ADD CONSTRAINT`](add-constraint.html) statement follows the `DROP CONSTRAINT` statement in the same transaction. -[`UNIQUE`](unique.html) | The `UNIQUE` constraint cannot be dropped directly. 
To remove the constraint, [drop the index](drop-index.html) that was created by the constraint, e.g., `DROP INDEX my_unique_constraint`. - -### Change constraints - -The procedure for changing a constraint depends on its type: - -Constraint Type | Procedure ------------------|----------- -[`CHECK`](check.html) | [Issue a transaction](transactions.html#syntax) that adds a new `CHECK` constraint ([`ADD CONSTRAINT`](add-constraint.html)), and then remove the existing one ([`DROP CONSTRAINT`](drop-constraint.html)). -[`DEFAULT` value](default-value.html) | The `DEFAULT` value can be changed through [`ALTER COLUMN`](alter-column.html). -[`FOREIGN KEY`](foreign-key.html) | [Issue a transaction](transactions.html#syntax) that adds a new `FOREIGN KEY` constraint ([`ADD CONSTRAINT`](add-constraint.html)), and then remove the existing one ([`DROP CONSTRAINT`](drop-constraint.html)). -[`NOT NULL`](not-null.html) | The `NOT NULL` constraint cannot be changed, only added and removed with [`ALTER COLUMN`](alter-column.html). -[`PRIMARY KEY`](primary-key.html) | To change a primary key, use an [`ALTER TABLE ... ALTER PRIMARY KEY`](alter-primary-key.html) statement.

    When you change a primary key with [`ALTER PRIMARY KEY`](alter-primary-key.html), the old primary key index becomes a secondary index. If you do not want the old primary key to become a secondary index, use [`DROP CONSTRAINT`](drop-constraint.html)/[`ADD CONSTRAINT`](add-constraint.html) to change the primary key. -[`UNIQUE`](unique.html) | [Issue a transaction](transactions.html#syntax) that adds a new `UNIQUE` constraint ([`ADD CONSTRAINT`](add-constraint.html)), and then remove the existing one ([`DROP CONSTRAINT`](drop-constraint.html)). - - -## See also - -- [`CREATE TABLE`](create-table.html) -- [`ADD CONSTRAINT`](add-constraint.html) -- [`DROP CONSTRAINT`](drop-constraint.html) -- [`SHOW CONSTRAINTS`](show-constraints.html) -- [`SHOW CREATE`](show-create.html) -- [`ALTER PRIMARY KEY`](alter-primary-key.html) -- [`ALTER TABLE`](alter-table.html) -- [`ALTER COLUMN`](alter-column.html) diff --git a/src/current/v22.1/copy-from.md b/src/current/v22.1/copy-from.md deleted file mode 100644 index d218b0aaa35..00000000000 --- a/src/current/v22.1/copy-from.md +++ /dev/null @@ -1,291 +0,0 @@ ---- -title: COPY FROM -summary: Copy data from a third-party client to CockroachDB. -toc: true -docs_area: reference.sql ---- - -The `COPY FROM` statement copies data from [`cockroach sql`](cockroach-sql.html) or other [third party clients](install-client-drivers.html) to tables in your cluster. - -{{site.data.alerts.callout_danger}} -By default, `COPY FROM` statements are segmented into batches of 100 rows. If any row encounters an error, only the rows that precede the failed row remain committed. - -If you need `COPY FROM` statements to commit atomically, issue the statements within an explicit transaction. -{{site.data.alerts.end}} - -## Syntax - -
    -{% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/copy_from.html %} -
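In addition to `cockroach sql`, client drivers that implement the PostgreSQL COPY protocol can issue `COPY ... FROM STDIN` programmatically. The following sketch is an illustration only: it assumes the Go [pgx](https://pkg.go.dev/github.com/jackc/pgx/v4) v4 driver, an insecure local cluster, and the `setecastronomy` table used in the CSV examples below.

{% include_cached copy-clipboard.html %}
~~~ go
package main

import (
	"context"
	"log"

	"github.com/jackc/pgx/v4"
)

func main() {
	ctx := context.Background()

	// Hypothetical connection string; adjust for your cluster.
	conn, err := pgx.Connect(ctx, "postgres://root@localhost:26257/defaultdb?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close(ctx)

	if _, err := conn.Exec(ctx, "CREATE TABLE IF NOT EXISTS setecastronomy (name STRING, phrase STRING)"); err != nil {
		log.Fatal(err)
	}

	// CopyFrom streams the rows to the server using COPY ... FROM STDIN.
	rows := [][]interface{}{
		{"My name is Werner Brandes", "My voice is my passport"},
	}
	count, err := conn.CopyFrom(ctx, pgx.Identifier{"setecastronomy"}, []string{"name", "phrase"}, pgx.CopyFromRows(rows))
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("copied %d rows", count)
}
~~~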
    - -### Parameters - -Parameter | Description ------------|------------- -`table_name` | The name of the table to which to copy data. -`opt_column_list` | The column name, or list of column names, to which to copy data. -`WITH copy_options` | Optionally specify one or more [copy options](#options). - -### Options - -Option | Description ------------|------------- -`DELIMITER 'value'` | The value that delimits the rows of input data, passed as a string. -`NULL 'value'` | The string that represents a `NULL` value in the input data. -`BINARY` | Copy data from binary format. If `BINARY` is specified, no other format can be specified.
    If no format is specified, CockroachDB copies in plaintext format. -`CSV` | Copy data from CSV format. If `CSV` is specified, no other format can be specified.
    If no format is specified, CockroachDB copies in plaintext format. -`ESCAPE` | Specify an escape character for quoting the fields in CSV data. - -## Required privileges - -Only members of the `admin` role can run `COPY` statements. By default, the `root` user belongs to the `admin` role. - -## Known limitations - -### `COPY` syntax not supported by CockroachDB - -{% include {{page.version.version}}/known-limitations/copy-syntax.md %} - -## Examples - -To run the examples, use [`cockroach demo`](cockroach-demo.html) to start a temporary, in-memory cluster with the [`movr` database](movr.html) preloaded. - -{% include_cached copy-clipboard.html %} -~~~ shell -cockroach demo -~~~ - -### Copy tab delimited data - -In the SQL shell, run the following command to start copying data to the `users` table: - -{% include_cached copy-clipboard.html %} -~~~ sql -COPY users FROM STDIN; -~~~ - -The following prompt should appear: - -~~~ -Enter data to be copied followed by a newline. -End with a backslash and a period on a line by itself, or an EOF signal. -~~~ - -Enter some tab-delimited data that you want copied to the `users` table. - -{{site.data.alerts.callout_info}} -You may need to edit the following rows after copying them to make sure the delimiters are tab characters. -{{site.data.alerts.end}} - -~~~ -8a3d70a3-d70a-4000-8000-00000000001d seattle Hannah '400 Broad St' 0987654321 -~~~ - -~~~ -9eb851eb-851e-4800-8000-00000000001e new york Carl '53 W 23rd St' 5678901234 -~~~ - -~~~ -\. -~~~ - -~~~ -COPY 2 -~~~ - -In the SQL shell, query the `users` table for the rows that you just inserted: - -{% include_cached copy-clipboard.html %} -~~~ sql -SELECT * FROM users WHERE id IN ('8a3d70a3-d70a-4000-8000-00000000001d', '9eb851eb-851e-4800-8000-00000000001e'); -~~~ - -~~~ - id | city | name | address | credit_card ---------------------------------------+----------+--------+----------------+------------- - 9eb851eb-851e-4800-8000-00000000001e | new york | Carl | '53 W 23rd St' | 5678901234 - 8a3d70a3-d70a-4000-8000-00000000001d | seattle | Hannah | '400 Broad St' | 0987654321 -(2 rows) -~~~ - -### Copy CSV delimited data - -You can copy CSV data into CockroachDB using the following methods: - -- [Copy CSV delimited data from stdin](#copy-csv-delimited-data-from-stdin) -- [Copy CSV delimited data from stdin with an escape character](#copy-csv-delimited-data-from-stdin-with-an-escape-character) -- [Copy CSV delimited data from stdin with hex encoded byte array data](#copy-csv-delimited-data-from-stdin-with-hex-encoded-byte-array-data) - -#### Copy CSV delimited data from stdin - -Run the following SQL statement to create a new table that you will load with CSV formatted data: - -{% include_cached copy-clipboard.html %} -~~~ sql -CREATE TABLE IF NOT EXISTS setecastronomy (name STRING, phrase STRING); -~~~ - -Run the following command to start copying data to the table: - -{% include_cached copy-clipboard.html %} -~~~ sql -COPY setecastronomy FROM STDIN WITH CSV; -~~~ - -You will see the following prompt: - -~~~ -Enter data to be copied followed by a newline. -End with a backslash and a period on a line by itself, or an EOF signal. -~~~ - -Enter the data, followed by a backslash and period on a line by itself: - -{% include_cached copy-clipboard.html %} -~~~ -"My name is Werner Brandes","My voice is my passport" -~~~ - -{% include_cached copy-clipboard.html %} -~~~ -\. 
-~~~ - -~~~ -COPY 1 -~~~ - -To view the data, enter the following query: - -{% include_cached copy-clipboard.html %} -~~~ sql -SELECT * FROM setecastronomy; -~~~ - -~~~ - name | phrase -----------------------------+------------------------------------ - My name is Werner Brandes | My voice is my passport -(1 row) -~~~ - -#### Copy CSV delimited data from stdin with an escape character - -Run the following SQL statement to create a new table that you will load with CSV formatted data: - -{% include_cached copy-clipboard.html %} -~~~ sql -CREATE TABLE IF NOT EXISTS setecastronomy (name STRING, phrase STRING); -~~~ - -To copy CSV data into CockroachDB and specify an escape character for quoting the fields, enter the following statement: - -{% include_cached copy-clipboard.html %} -~~~ sql -COPY setecastronomy FROM STDIN WITH CSV DELIMITER ',' ESCAPE '\'; -~~~ - -You will see the following prompt: - -~~~ -Enter data to be copied followed by a newline. -End with a backslash and a period on a line by itself, or an EOF signal. -~~~ - -Enter the data, followed by a backslash and period on a line by itself: - -{% include_cached copy-clipboard.html %} -~~~ -"My name is Werner Brandes","\"My\" \"voice\" \"is\" \"my\" \"passport\"" -~~~ - -{% include_cached copy-clipboard.html %} -~~~ -\. -~~~ - -~~~ -COPY 1 -~~~ - -To view the data, enter the following query: - -{% include_cached copy-clipboard.html %} -~~~ sql -SELECT * FROM setecastronomy; -~~~ - -~~~ - name | phrase -----------------------------+------------------------------------ - My name is Werner Brandes | My voice is my passport - My name is Werner Brandes | "My" "voice" "is" "my" "passport" -(2 rows) -~~~ - -#### Copy CSV delimited data from stdin with hex encoded byte array data - -To copy CSV data into CockroachDB and specify that CockroachDB should ingest hex encoded byte array data, enter the following statements: - -{% include_cached copy-clipboard.html %} -~~~ sql -CREATE TABLE IF NOT EXISTS mybytes(a INT PRIMARY KEY, b BYTEA); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -set bytea_output = 'escape'; -~~~ - -To import the data, enter the following statement: - -{% include_cached copy-clipboard.html %} -~~~ sql -COPY mybytes FROM STDIN WITH CSV; -~~~ - -Enter the data, followed by a backslash and period on a line by itself: - -{% include_cached copy-clipboard.html %} -~~~ -1,X'6869 -2,x'6869 -3,"\x6869" -4,\x6869 -~~~ - -{% include_cached copy-clipboard.html %} -~~~ -\. -~~~ - -~~~ -COPY 4 -~~~ - -To view the data, enter the following query: - -{% include_cached copy-clipboard.html %} -~~~ sql -SELECT * FROM mybytes; -~~~ - -~~~ - a | b -----+--------- - 1 | X'6869 - 2 | x'6869 - 3 | hi - 4 | hi -(4 rows) -~~~ - -## See also - -- [`IMPORT`](import.html) -- [`IMPORT INTO`](import-into.html) -- [`EXPORT`](export.html) -- [Install a Driver or ORM Framework](install-client-drivers.html) -- [Migrate from PostgreSQL](migrate-from-postgres.html) -- [Migration Overview](migration-overview.html) diff --git a/src/current/v22.1/cost-based-optimizer.md b/src/current/v22.1/cost-based-optimizer.md deleted file mode 100644 index 5c0849fc24c..00000000000 --- a/src/current/v22.1/cost-based-optimizer.md +++ /dev/null @@ -1,398 +0,0 @@ ---- -title: Cost-Based Optimizer -summary: The cost-based optimizer seeks the lowest cost for a query, usually related to time. 
-toc: true -keywords: gin, gin index, gin indexes, inverted index, inverted indexes, accelerated index, accelerated indexes -docs_area: develop ---- - -The cost-based optimizer seeks the lowest cost for a query, usually related to time. - -## How is cost calculated? - -A given SQL query can have thousands of equivalent query plans with vastly different execution times. The cost-based optimizer enumerates these plans and chooses the lowest cost plan. - -Cost is roughly calculated by: - -- Estimating how much time each node in the query plan will use to process all results -- Modeling how data flows through the query plan - -The most important factor in determining the quality of a plan is cardinality (i.e., the number of rows); the fewer rows each SQL operator needs to process, the faster the query will run. - -## Table statistics - -The cost-based optimizer can often find more performant query plans if it has access to statistical data on the contents of your tables. This data needs to be generated from scratch for new tables, and regenerated periodically for existing tables. - -By default, CockroachDB automatically generates table statistics when tables are [created](create-table.html), and as they are [updated](update.html). It does this using a [background job](create-statistics.html#view-statistics-jobs) that automatically determines which columns to get statistics on — specifically, it chooses: - -- Columns that are part of the primary key or an index (in other words, all indexed columns). -- Up to 100 non-indexed columns. - -By default, CockroachDB also automatically collects [multi-column statistics](create-statistics.html#create-statistics-on-multiple-columns) on columns that prefix an index. - -{{site.data.alerts.callout_info}} -[Schema changes](online-schema-changes.html) trigger automatic statistics collection for the affected table(s). -{{site.data.alerts.end}} - -For best query performance, most users should leave automatic statistics enabled with the default settings. Advanced users can follow the steps provided in this section for performance tuning and troubleshooting. - -### Control statistics refresh rate - -Statistics are refreshed in the following cases: - -- When there are no statistics. -- When it's been a long time since the last refresh, where "long time" is defined according to a moving average of the time across the last several refreshes. -- After a successful [`IMPORT`](import.html) or [`RESTORE`](restore.html) into the table. -- After any schema change affecting the table. -- After each mutation operation ([`INSERT`](insert.html), [`UPDATE`](update.html), or [`DELETE`](delete.html)), the probability of a refresh is calculated using a formula that takes the [cluster settings](cluster-settings.html) shown in the following table as inputs. These settings define the target number of rows in a table that must be stale before statistics on that table are refreshed. Increasing either setting will reduce the frequency of refreshes. In particular, `min_stale_rows` impacts the frequency of refreshes for small tables, while `fraction_stale_rows` has more of an impact on larger tables. - - - | Setting | Default Value | Details | - |------------------------------------------------------+---------------+---------------------------------------------------------------------------------------| - | `sql.stats.automatic_collection.fraction_stale_rows` | 0.2 | Target fraction of stale rows per table that will trigger a statistics refresh. 
| - | `sql.stats.automatic_collection.min_stale_rows` | 500 | Target minimum number of stale rows per table that will trigger a statistics refresh. | - - {{site.data.alerts.callout_info}} - Because the formula for statistics refreshes is probabilistic, you will not see statistics update immediately after changing these settings, or immediately after exactly 500 rows have been updated. - {{site.data.alerts.end}} - -#### Small versus large table examples - -Suppose the [clusters settings](cluster-settings.html) `sql.stats.automatic_collection.fraction_stale_rows` and `sql.stats.automatic_collection.min_stale_rows` have the default values .2 and 500 as shown in the preceding table. - -If a table has 100 rows and 20 became stale, a re-collection would not be triggered because, even though 20% of the rows are stale, they do not meet the 500 row minimum. - -On the other hand, if a table has 1,500,000,000 rows, 20% of that, or 300,000,000 rows, would have to become stale before auto statistics collection was triggered. With a table this large, you would have to lower `sql.stats.automatic_collection.fraction_stale_rows` significantly to allow for regular stats collections. This can cause smaller tables to have stats collected much more frequently, because it is a global setting that affects automatic stats collection for all tables. - -In such cases we recommend that you use the [`sql_stats_automatic_collection_enabled` storage parameter](#enable-and-disable-automatic-statistics-collection-for-tables), which lets you configure auto statistics on a per-table basis. - -### Enable and disable automatic statistics collection for clusters - -Automatic statistics collection is enabled by default. To disable automatic statistics collection, follow these steps: - -1. Set the `sql.stats.automatic_collection.enabled` cluster setting to `false`: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SET CLUSTER SETTING sql.stats.automatic_collection.enabled = false; - ~~~ - -1. Use the [`SHOW STATISTICS`](show-statistics.html) statement to view automatically generated statistics. - -1. Delete the automatically generated statistics: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > DELETE FROM system.table_statistics WHERE true; - ~~~ - -1. Restart the nodes in your cluster to clear the statistics caches. - -To learn how to manually generate statistics, see the [`CREATE STATISTICS` examples](create-statistics.html#examples). - -### Enable and disable automatic statistics collection for tables - -Statistics collection can be expensive for large tables, and you may prefer to defer collection until after data is finished loading or during off-peak hours. Tables that are frequently updated, including small tables, may trigger statistics collection more often, which can lead to unnecessary overhead and unpredictable query plan changes. - -You can enable and disable automatic statistics collection for individual tables using the `sql_stats_automatic_collection_enabled` storage parameter. For example: - -~~~ sql -CREATE TABLE accounts ( - id INT PRIMARY KEY, - balance DECIMAL) -WITH (sql_stats_automatic_collection_enabled = false); -~~~ - -The table setting **takes precedence** over the cluster setting described in -[Enable and disable automatic statistics collection for clusters](#enable-and-disable-automatic-statistics-collection-for-clusters). - -You can set the table settings at table creation time or using [`ALTER TABLE ... 
SET`](set-storage-parameter.html): - -~~~ sql -CREATE TABLE accounts ( - id INT PRIMARY KEY, - balance DECIMAL); - -ALTER TABLE accounts -SET (sql_stats_automatic_collection_enabled = false); -~~~ - -The current table settings are shown in the `WITH` clause output of `SHOW CREATE TABLE`: - -~~~ sql - table_name | create_statement --------------+--------------------------------------------------------- - accounts | CREATE TABLE public.accounts ( - | id INT8 NOT NULL, - | balance DECIMAL NULL, - | CONSTRAINT accounts_pkey PRIMARY KEY (id ASC) - | ) WITH (sql_stats_automatic_collection_enabled = false) -(1 row) -~~~ - -`ALTER TABLE accounts RESET (sql_stats_automatic_collection_enabled)` removes the table setting, in which case -the cluster setting is in effect for the table. - -The "stale row" cluster settings discussed in [Control statistics refresh rate](#control-statistics-refresh-rate) have table -setting counterparts `sql_stats_automatic_collection_fraction_stale_rows` and `sql_stats_automatic_collection_min_stale_rows`. For example: - -~~~ sql -CREATE TABLE accounts ( - id INT PRIMARY KEY, - balance DECIMAL) -WITH (sql_stats_automatic_collection_enabled = true, -sql_stats_automatic_collection_min_stale_rows = 1000000, -sql_stats_automatic_collection_fraction_stale_rows= 0.05 -); - -ALTER TABLE accounts -SET (sql_stats_automatic_collection_fraction_stale_rows = 0.1, -sql_stats_automatic_collection_min_stale_rows = 2000); -~~~ - -Automatic statistics rules are checked once per minute. While altered automatic statistics table settings take immediate effect for any subsequent DML statements on a table, running row mutations that started prior to modifying the table settings may still trigger statistics collection based on the settings that existed before you ran the `ALTER TABLE ... SET` statement. - -### Control histogram collection - -By default, the optimizer collects histograms for all index columns (specifically the first column in each index) during automatic statistics collection. If a single column statistic is explicitly requested using manual invocation of [`CREATE STATISTICS`](create-statistics.html), a histogram will be collected, regardless of whether or not the column is part of an index. - -{{site.data.alerts.callout_info}} -CockroachDB does not support: - -- Histograms on [`ARRAY`-typed](array.html) columns. As a result, statistics created on `ARRAY`-typed columns do not include histograms. -- Multi-column histograms. -{{site.data.alerts.end}} - -If you are an advanced user and need to disable histogram collection for troubleshooting or performance tuning reasons, change the [`sql.stats.histogram_collection.enabled` cluster setting](cluster-settings.html) by running [`SET CLUSTER SETTING`](set-cluster-setting.html) as follows: - -{% include_cached copy-clipboard.html %} -~~~ sql -SET CLUSTER SETTING sql.stats.histogram_collection.enabled = false; -~~~ - -When `sql.stats.histogram_collection.enabled` is set to `false`, histograms are never collected, either as part of automatic statistics collection or by manually invoking [`CREATE STATISTICS`](create-statistics.html). - -### Control whether the `avg_size` statistic is used to cost scans - -{% include_cached new-in.html version="v22.1" %} The `avg_size` table statistic represents the average size of a table column. -If a table does not have an average size statistic available for a column, it uses the default value of 4 bytes. - -The optimizer uses `avg_size` to cost scans and relevant joins. 
Costing scans per row regardless of the size of the columns comprising the row doesn't account for time to read or transport a large number of bytes over the network. This can lead to undesirable plans when there are multiple options for scans or joins that read directly from tables. - -Cockroach Labs recommends that you allow the optimizer to consider column size when costing plans. If you are an advanced user and need to disable using `avg_size` for troubleshooting or performance tuning reasons, you can disable it by setting the `cost_scans_with_default_col_size` [session variable](set-vars.html) to true with `SET cost_scans_with_default_col_size=true`. - -## Control whether the optimizer creates a plan with a full scan - -Even if you have [secondary indexes](schema-design-indexes.html), the optimizer may determine that a full table scan will be faster. For example, if you add a secondary index to a table with a large number of rows and find that a statement plan is not using the secondary index, then it is likely that performing a full table scan using the primary key is faster than doing a secondary index scan plus an [index join](indexes.html#example). - -You can disable statement plans that perform full table scans with the `disallow_full_table_scans` [session variable](set-vars.html). - -If you disable full scans, you can set the `large_full_scan_rows` session variable to specify the maximum table size allowed for a full scan. If no alternative plan is possible, the optimizer will return an error. - -If you disable full scans, and you provide an [index hint](table-expressions.html#force-index-selection), the optimizer will try to avoid a full scan while also respecting the index hint. If this is not possible, the optimizer will return an error. If you do not provide an index hint, the optimizer will return an error, the full scan will be logged, and the `sql.guardrails.full_scan_rejected.count` [metric](ui-overview-dashboard.html) will be updated. - -## Locality optimized search in multi-region clusters - -In [multi-region deployments](multiregion-overview.html) with [regional by row tables](multiregion-overview.html#regional-by-row-tables), the optimizer, in concert with the [SQL engine](architecture/sql-layer.html), may perform a *locality optimized search* to attempt to avoid high-latency, cross-region communication between nodes. If there is a possibility that the results of a query all live in local rows, the database will first search for rows in the gateway node's region. The search only continues in remote regions if rows in the local region did not satisfy the query. Examples of queries that can use locality optimized search include unique key lookups and queries with [`LIMIT`](limit-offset.html) clauses. - -Even if a value cannot be read locally, CockroachDB takes advantage of the fact that some of the other regions are much closer than others and thus can be queried with lower latency. In this case, it performs all lookups against the remote regions in parallel and returns the result once it is retrieved, without having to wait for each lookup to come back. This can lead to increased performance in multi-region deployments, since it means that results can be returned from wherever they are first found without waiting for all of the other lookups to return. - -{{site.data.alerts.callout_info}} -The asynchronous parallel lookup behavior does not occur if you [disable vectorized execution](vectorized-execution.html#configure-vectorized-execution). 
-{{site.data.alerts.end}} - -Locality optimized search is supported for scans that are guaranteed to return 100,000 keys or fewer. This optimization allows the execution engine to avoid visiting remote regions if all requested keys are found in the local region, thus reducing the latency of the query. - -### Limitations - -{% include {{page.version.version}}/sql/locality-optimized-search-limited-records.md %} - -{% include {{page.version.version}}/sql/locality-optimized-search-virtual-computed-columns.md %} - -## Query plan cache - -CockroachDB uses a cache for the query plans generated by the optimizer. This can lead to faster query execution since the database can reuse a query plan that was previously calculated, rather than computing a new plan each time a query is executed. - -The query plan cache is enabled by default. To disable it, execute the following statement: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SET CLUSTER SETTING sql.query_cache.enabled = false; -~~~ - -Only the following statements use the plan cache: - -- [`SELECT`](select-clause.html) -- [`INSERT`](insert.html) -- [`UPDATE`](update.html) -- [`UPSERT`](upsert.html) -- [`DELETE`](delete.html) - -The optimizer can use cached plans if they are: - -- Prepared statements. -- Non-prepared statements using identical constant values. - -## Join reordering - -For a query involving multiple joins, the cost-based optimizer will explore additional [join orderings](joins.html) in an attempt to find the lowest-cost execution plan, which can lead to significantly better performance in some cases. - -Because this process leads to an exponential increase in the number of possible execution plans for such queries, it's only used to reorder subtrees containing 8 or fewer joins by default. - -To change this setting, which is controlled by the `reorder_joins_limit` [session variable](set-vars.html), run the following statement: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SET reorder_joins_limit = 0; -~~~ - -To disable this feature, set the variable to `0`. You can configure the default `reorder_joins_limit` session setting with the [cluster setting](cluster-settings.html) `sql.defaults.reorder_joins_limit`, which has a default value of `8`. - -{{site.data.alerts.callout_danger}} -To avoid performance degradation, Cockroach Labs strongly recommends setting this value to a maximum of 8. If set too high, the cost of generating and costing execution plans can end up dominating the total execution time of the query. -{{site.data.alerts.end}} - -For more information about selecting an optimal join ordering, see our blog post [An Introduction to Join Ordering](https://www.cockroachlabs.com/blog/join-ordering-pt1/). - -### Reduce planning time for queries with many joins - -The cost-based optimizer explores multiple join orderings to find the lowest-cost plan. If there are many joins or join subtrees in the query, this can increase the number of execution plans the optimizer explores, and therefore the exploration and planning time. 
If the planning phase of a query takes a long time (on the order of multiple seconds or minutes) to plan, or the query plan involves many joins, consider the following alternatives to reduce the planning time: - -- To limit the size of the subtree that can be reordered, set the `reorder_joins_limit` [session variable](set-vars.html) to a lower value, for example: - - ~~~ sql - SET reorder_joins_limit = 2; - ~~~ - - If the join ordering inherent in the query is acceptable, for the shortest planning time, you can set `reorder_joins_limit` to `0`. This disables exploration of join orderings entirely. - - By reducing `reorder_joins_limit` CockroachDB reduces the number of plans explored, so a less efficient plan may be chosen by the optimizer. - - If one query has a slow planning time, you can avoid interfering with other query plans by setting `reorder_joins_limit` to the desired lower value before executing that query and resetting the session variable to the default after executing the query. - -- If setting and resetting the session variable is cumbersome or if there are multiple independent joins in the query where some may benefit from join reordering, you can use a [join hint](#join-hints). If the join has a hint specifying the type of join to something other than the default `INNER` (i.e., `INNER LOOKUP`, `MERGE`, `HASH`, etc.), join reordering will be disabled and the plan will respect the join order inherent in the way the query is written. This works at the expression level and doesn't affect the entire query (for instance, if you have a union of two joins they are independent join expressions). - -## Join hints - -To force the use of a specific join algorithm even if the optimizer determines that a different plan would have a lower cost, you can use a _join hint_. You specify a join hint as ` JOIN`. For example: - -- `INNER HASH JOIN` -- `OUTER MERGE JOIN` -- `LEFT LOOKUP JOIN` -- `CROSS HASH JOIN` -- `INNER INVERTED JOIN` -- `LEFT INVERTED JOIN` - -{{site.data.alerts.callout_info}} -Due to SQL's implicit `AS` syntax, you cannot specify a join hint with only the join algorithm keyword (e.g., `MERGE`). For example, `a MERGE JOIN b` will be interpreted as having an implicit `AS` and be executed as `a AS MERGE JOIN b`, which is equivalent to `a JOIN b`. Because the resulting query might execute without returning any hint-related error (because it is valid SQL), it will seem like the join hint "worked", but actually it didn't affect which join algorithm was used. The correct syntax is `a INNER MERGE JOIN b`. -{{site.data.alerts.end}} - -For a join hint example, see [Use the right join type](apply-statement-performance-rules.html#rule-3-use-the-right-join-type). - -### Supported join algorithms - -- `HASH`: Forces a hash join; in other words, it disables merge and lookup joins. A hash join is always possible, even if there are no equality columns - CockroachDB considers the nested loop join with no index a degenerate case of the hash join (i.e., a hash table with one bucket). - -- `MERGE`: Forces a merge join, even if it requires re-sorting both sides of the join. - -- `LOOKUP`: Forces a lookup join into the right side; the right side must be a table with a suitable index. Note that `LOOKUP` can only be used with `INNER` and `LEFT` joins. - -- `INVERTED`: Forces an inverted join into the right side; the right side must be a table with a suitable [GIN index](inverted-indexes.html). Note that `INVERTED` can only be used with `INNER` and `LEFT` joins. 
- - {{site.data.alerts.callout_info}} - You cannot use inverted joins on [partial GIN indexes](inverted-indexes.html#partial-gin-indexes). - {{site.data.alerts.end}} - -If it is not possible to use the algorithm specified in the hint, an error is signaled. - -{{site.data.alerts.callout_info}} -To make the optimizer prefer lookup joins to merge joins when performing foreign key checks, set the `prefer_lookup_joins_for_fks` [session variable](set-vars.html) to `on`. -{{site.data.alerts.end}} - -### Additional considerations - -- This syntax is consistent with the [SQL Server syntax for join hints](https://docs.microsoft.com/en-us/sql/t-sql/queries/hints-transact-sql-join?view=sql-server-2017), except that: - - - SQL Server uses `LOOP` instead of `LOOKUP`. - - - CockroachDB does not support `LOOP` and instead supports `LOOKUP` for the specific case of nested loop joins with an index. - -- When you specify a join hint, the two tables will not be reordered by the optimizer. The reordering behavior has the following characteristics, which can be affected by hints: - - - Given `a JOIN b`, CockroachDB will not try to commute to `b JOIN a`. This means that you will need to pay attention to this ordering, which is especially important for lookup joins. Without a hint, `a JOIN b` might be executed as `b INNER LOOKUP JOIN a` using an index into `a`, whereas `a INNER LOOKUP JOIN b` requires an index into `b`. - - - `(a JOIN b) JOIN c` might be changed to `a JOIN (b JOIN c)`, but this does not happen if `a JOIN b` uses a hint; the hint forces that particular join to happen as written in the query. - -- You should reconsider hint usage with each new release of CockroachDB. Due to improvements in the optimizer, hints specified to work with an older version may cause decreased performance in a newer version. - -## Zigzag joins - -The optimizer may plan a zigzag join when there are at least **two secondary indexes on the same table** and the table is filtered in a query with at least two filters constraining different attributes to a constant. A zigzag join works by "zigzagging" back and forth between two indexes and returning only rows with matching primary keys within a specified range. For example: - -~~~sql -CREATE TABLE abc ( - a INT, - b INT, - INDEX (a), - INDEX (b) -); - -EXPLAIN SELECT * FROM abc WHERE a = 10 AND b = 20; -~~~ -~~~ - info ----------------------------------- - distribution: local - vectorized: true - - • zigzag join - pred: (a = 10) AND (b = 20) - left table: abc@abc_a_idx - left columns: (a) - left fixed values: 1 column - right table: abc@abc_b_idx - right columns: (b) - right fixed values: 1 column -(11 rows) -~~~ - -### Prevent or force a zigzag join - -The optimizer supports index hints to prevent or force a zigzag join. Apply the hints in the same way as other existing [index hints](table-expressions.html#force-index-selection). - -To prevent the optimizer from planning a zigzag join for the specified table, use the hint `NO_ZIGZAG_JOIN`. For example: - -~~~ sql -SELECT * FROM abc@{NO_ZIGZAG_JOIN}; -~~~ - -{% include_cached new-in.html version="v22.1" %} To force the optimizer to plan a zigzag join for the specified table, use the hint `FORCE_ZIGZAG`. For example: - -~~~ sql -SELECT * FROM abc@{FORCE_ZIGZAG}; -~~~ - -{{site.data.alerts.callout_danger}} -If you have an index named `FORCE_ZIGZAG` and use the hint `table@{FORCE_ZIGZAG}` it will no longer have the same behavior. 
-{{site.data.alerts.end}} - -## Inverted join examples - -{% include {{ page.version.version }}/sql/inverted-joins.md %} - -## Known limitations - -* {% include {{page.version.version}}/known-limitations/old-multi-col-stats.md %} -* {% include {{page.version.version}}/known-limitations/single-col-stats-deletion.md %} -* {% include {{page.version.version}}/known-limitations/stats-refresh-upgrade.md %} - -## See also - -- [`JOIN` expressions](joins.html) -- [`SET {session variable}`](set-vars.html) -- [`SET CLUSTER SETTING`](set-cluster-setting.html) -- [`RESET CLUSTER SETTING`](reset-cluster-setting.html) -- [`SHOW {session variable}`](show-vars.html) -- [`CREATE STATISTICS`](create-statistics.html) -- [`SHOW STATISTICS`](show-statistics.html) -- [`EXPLAIN`](explain.html) diff --git a/src/current/v22.1/crdb-internal.md b/src/current/v22.1/crdb-internal.md deleted file mode 100644 index 7d1d8d16231..00000000000 --- a/src/current/v22.1/crdb-internal.md +++ /dev/null @@ -1,1332 +0,0 @@ ---- -title: crdb_internal -summary: The crdb_internal schema contains read-only views that you can use for introspection into CockroachDB internals. -toc: true -docs_area: reference.sql ---- - -The `crdb_internal` [system catalog](system-catalogs.html) is a schema that contains information about internal objects, processes, and metrics related to a specific database. `crdb_internal` tables are read-only. - - - -## Tables - -{{site.data.alerts.callout_danger}} -Do not use the `crdb_internal` tables marked with ✗ in production environments for the following reasons: - -- The contents of these tables are unstable, and subject to change in new releases of CockroachDB, without prior notice. -- There are memory and latency costs associated with each table in `crdb_internal`. Accessing the tables in the schema can impact cluster stability and performance. -{{site.data.alerts.end}} - -To view the schema and query examples for a table supported in production, click the table name. - -Table name | Description| Use in production ------------|------------|------------------- -`active_range_feeds` | Contains information about [range feeds](architecture/distribution-layer.html) on nodes in your cluster. | ✗ -`backward_dependencies` | Contains information about backward dependencies.| ✗ -`builtin_functions` | Contains information about supported [functions](functions-and-operators.html).| ✗ -[`cluster_contended_indexes`](#cluster_contended_indexes) | Contains information about [contended](performance-best-practices-overview.html#transaction-contention) indexes in your cluster.| ✓ -[`cluster_contended_keys`](#cluster_contended_keys) | Contains information about [contended](performance-best-practices-overview.html#transaction-contention) keys in your cluster.| ✓ -[`cluster_contended_tables`](#cluster_contended_tables) | Contains information about [contended](performance-best-practices-overview.html#transaction-contention) tables in your cluster.| ✓ -[`cluster_contention_events`](#cluster_contention_events) | Contains information about [contention](performance-best-practices-overview.html#transaction-contention) in your cluster.| ✓ -[`cluster_locks`](#cluster_locks) | Contains information about [locks](architecture/transaction-layer.html#concurrency-control) held by [transactions](transactions.html) on specific [keys](architecture/overview.html#architecture-range). 
| ✓ -`cluster_database_privileges` | Contains information about the [database privileges](security-reference/authorization.html#privileges) on your cluster.| ✗ -`cluster_distsql_flows` | Contains information about the flows of the [DistSQL execution](architecture/sql-layer.html#distsql) scheduled in your cluster.| ✗ -`cluster_inflight_traces` | Contains information about in-flight [tracing](show-trace.html) in your cluster.| ✗ -[`cluster_queries`](#cluster_queries) | Contains information about queries running on your cluster.| ✓ -[`cluster_sessions`](#cluster_sessions) | Contains information about cluster sessions, including current and past queries.| ✓ -`cluster_settings` | Contains information about [cluster settings](cluster-settings.html).| ✗ -[`cluster_transactions`](#cluster_transactions) | Contains information about transactions running on your cluster.| ✓ -`create_statements` | Contains information about tables and indexes in your database.| ✗ -`create_type_statements` | Contains information about [user-defined types](enum.html) in your database.| ✗ -`cross_db_references` | Contains information about objects that reference other objects, such as [foreign keys](foreign-key.html) or [views](views.html), across databases in your cluster.| ✗ -`databases` | Contains information about the databases in your cluster.| ✗ -`default_privileges` | Contains information about per-database default [privileges](security-reference/authorization.html#privileges).| ✗ -`feature_usage` | Contains information about feature usage on your cluster.| ✗ -`forward_dependencies` | Contains information about forward dependencies.| ✗ -`gossip_alerts` | Contains information about gossip alerts.| ✗ -`gossip_liveness` | Contains information about your cluster's gossip liveness.| ✗ -`gossip_network` | Contains information about your cluster's gossip network.| ✗ -`gossip_nodes` | Contains information about nodes in your cluster's gossip network.| ✗ -`index_columns` | Contains information about [indexed](indexes.html) columns in your cluster.| ✗ -[`index_usage_statistics`](#index_usage_statistics) | Contains statistics about the primary and secondary indexes used in statements.| ✓ -`invalid_objects` | Contains information about invalid objects in your cluster.| ✗ -`jobs` | Contains information about [jobs](show-jobs.html) running on your cluster.| ✗ -`kv_node_liveness` | Contains information about [node liveness](cluster-setup-troubleshooting.html#node-liveness-issues).| ✗ -`kv_node_status` | Contains information about node status at the [key-value layer](architecture/storage-layer.html).| ✗ -`kv_store_status` | Contains information about the key-value store for your cluster.| ✗ -`leases` | Contains information about [leases](architecture/replication-layer.html#leases) in your cluster.| ✗ -`lost_descriptors_with_data` | Contains information about table descriptors that have been deleted but still have data left over in storage.| ✗ -`node_build_info` | Contains information about nodes in your cluster.| ✗ -`node_contention_events`| Contains information about contention on the gateway node of your cluster.| ✗ -`node_distsql_flows` | Contains information about the flows of the [DistSQL execution](architecture/sql-layer.html#distsql) scheduled on nodes in your cluster.| ✗ -`node_inflight_trace_spans` | Contains information about currently in-flight spans in the current node.| ✗ -`node_metrics` | Contains metrics for nodes in your cluster.| ✗ -`node_queries` | Contains information about queries running on nodes in your 
cluster.| ✗ -`node_runtime_info` | Contains runtime information about nodes in your cluster.| ✗ -`node_sessions` | Contains information about sessions to nodes in your cluster.| ✗ -`node_statement_statistics` | Contains statement statistics for nodes in your cluster.| ✗ -`node_transaction_statistics` | Contains transaction statistics for nodes in your cluster.| ✗ -`node_transactions` | Contains information about transactions for nodes in your cluster.| ✗ -`node_txn_stats` | Contains transaction statistics for nodes in your cluster.| ✗ -`partitions` | Contains information about [partitions](partitioning.html) in your cluster.| ✗ -`predefined_comments` | Contains predefined comments about your cluster.| ✗ -`ranges` | Contains information about ranges in your cluster.| ✗ -`ranges_no_leases` | Contains information about ranges in your cluster, without leases.| ✗ -`regions` | Contains information about [cluster regions](multiregion-overview.html#cluster-regions).| ✗ -`schema_changes` | Contains information about schema changes in your cluster.| ✗ -`session_trace` | Contains session trace information for your cluster.| ✗ -`session_variables` | Contains information about [session variables](set-vars.html) in your cluster.| ✗ -[`statement_statistics`](#statement_statistics) | Aggregates in-memory and persisted [statistics](ui-statements-page.html#statement-statistics) from `system.statement_statistics` within hourly time intervals based on UTC time, rounded down to the nearest hour. To reset the statistics call `SELECT crdb_internal.reset_sql_stats()`.| ✓ -`table_columns` | Contains information about table columns in your cluster.| ✗ -`table_indexes` | Contains information about table indexes in your cluster.| ✗ -`table_row_statistics` | Contains row count statistics for tables in the current database.| ✗ -`tables` | Contains information about tables in your cluster.| ✗ -[`transaction_contention_events`](#transaction_contention_events)| Contains information about historical transaction [contention](performance-best-practices-overview.html#transaction-contention) events. | ✓ -[`transaction_statistics`](#transaction_statistics) | Aggregates in-memory and persisted [statistics](ui-transactions-page.html#transaction-statistics) from `system.transaction_statistics` within hourly time intervals based on UTC time, rounded down to the nearest hour. 
To reset the statistics, call `SELECT crdb_internal.reset_sql_stats()`.| ✓ -`zones` | Contains information about [zone configurations](configure-replication-zones.html) in your cluster.| ✗ - -## List `crdb_internal` tables - -To list the `crdb_internal` tables for the [current database](sql-name-resolution.html#current-database), use the following [`SHOW TABLES`](show-tables.html) statement: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW TABLES FROM crdb_internal; -~~~ - -~~~ - schema_name | table_name | type | owner | estimated_row_count | locality -----------------+---------------------------------+-------+-------+---------------------+----------- - crdb_internal | active_range_feeds | table | NULL | NULL | NULL - crdb_internal | backward_dependencies | table | NULL | NULL | NULL - crdb_internal | builtin_functions | table | NULL | NULL | NULL - crdb_internal | cluster_contended_indexes | view | NULL | NULL | NULL - crdb_internal | cluster_contended_keys | view | NULL | NULL | NULL - crdb_internal | cluster_contended_tables | view | NULL | NULL | NULL - crdb_internal | cluster_contention_events | table | NULL | NULL | NULL - crdb_internal | cluster_database_privileges | table | NULL | NULL | NULL - crdb_internal | cluster_distsql_flows | table | NULL | NULL | NULL - crdb_internal | cluster_inflight_traces | table | NULL | NULL | NULL - ... -~~~ - -## Query `crdb_internal` tables - -To get detailed information about objects, processes, or metrics related to your database, you can read from the `crdb_internal` table that corresponds to the item of interest. - -{{site.data.alerts.callout_success}} -- To ensure that you can view all of the tables in `crdb_internal`, query the tables as a user with the [`admin` role](security-reference/authorization.html#admin-role). -- Unless specified otherwise, queries to `crdb_internal` assume the [current database](sql-name-resolution.html#current-database). -{{site.data.alerts.end}} - -For example, to return the `crdb_internal` table for the index usage statistics of the [`movr`](movr.html) database, you can use the following statement: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM movr.crdb_internal.index_usage_statistics; -~~~ - -~~~ - table_id | index_id | total_reads | last_read ------------+----------+-------------+-------------------------------- - 53 | 1 | 36792 | 2021-12-02 22:35:39.270713+00 - 54 | 1 | 24527 | 2021-12-02 22:35:39.053428+00 - 54 | 2 | 582120 | 2021-12-02 22:35:39.985883+00 - 55 | 1 | 309194 | 2021-12-02 22:35:39.619138+00 - 55 | 2 | 1 | 2021-12-02 00:28:26.176012+00 - 55 | 3 | 1 | 2021-12-02 00:28:31.122689+00 - 56 | 1 | 1 | 2021-12-02 00:28:32.074418+00 - 57 | 1 | 6116 | 2021-12-02 22:34:50.446242+00 - 58 | 1 | 3059 | 2021-12-02 22:34:50.447769+00 -~~~ - -## Table schema - -This section provides the schema and examples for tables supported in production. - -### `cluster_contended_indexes` - -Column | Type | Description -------------|-----|------------ -`database_name` | `STRING` | The name of the database experiencing [contention](performance-best-practices-overview.html#transaction-contention). -`schema_name` | `STRING` | The name of the schema experiencing [contention](performance-best-practices-overview.html#transaction-contention). -`table_name` | `STRING` | The name of the table experiencing [contention](performance-best-practices-overview.html#transaction-contention). 
-`index_name` | `STRING` | The name of the index experiencing [contention](performance-best-practices-overview.html#transaction-contention). -`num_contention_events` | `INT8` | The number of [contention](performance-best-practices-overview.html#transaction-contention) events. - -#### View all indexes that have experienced contention - -{% include_cached copy-clipboard.html %} -~~~sql -SELECT * FROM movr.crdb_internal.cluster_contended_indexes; -~~~ -~~~ - - database_name | schema_name | table_name | index_name | num_contention_events -----------------+-------------+------------+---------------------------------------+------------------------ - movr | public | vehicles | vehicles_auto_index_fk_city_ref_users | 2 -~~~ - -### `cluster_contended_keys` - -Column | Type | Description -------------|-----|------------ -`database_name` | `STRING` | The name of the database experiencing [contention](performance-best-practices-overview.html#transaction-contention). -`schema_name` | `STRING` | The name of the schema experiencing [contention](performance-best-practices-overview.html#transaction-contention). -`table_name` | `STRING` | The name of the table experiencing [contention](performance-best-practices-overview.html#transaction-contention). -`index_name` | `STRING` | The name of the index experiencing [contention](performance-best-practices-overview.html#transaction-contention). -`key` | `BYTES` | The key experiencing [contention](performance-best-practices-overview.html#transaction-contention). -`num_contention_events` | `INT8` | The number of [contention](performance-best-practices-overview.html#transaction-contention) events. - -#### View all keys that have experienced contention - -{% include_cached copy-clipboard.html %} -~~~sql -SELECT table_name, index_name, key, num_contention_events FROM movr.crdb_internal.cluster_contended_keys where database_name = 'movr'; -~~~ - -~~~ - table_name | index_name | key | num_contention_events --------------+---------------------------------------+-----------------------------------------------------------------------------------------------------------------------------+------------------------ - vehicles | vehicles_auto_index_fk_city_ref_users | /107/2/"amsterdam"/"\xe2\xc2\xdcJ$\xf3A\xa2\x98\xad.\xe6\x1e\xf4gLݷ95\xfc0\x95\xea["/0 | 1 - vehicles | vehicles_auto_index_fk_city_ref_users | /107/2/"los angeles"/"h\xc8\xfa\xc5J\xf8A7\xbe\x98\xa3\x94\x8e\xf4\x991"/",\u007fh)\"\x92G̞\xde\xeb\x973\xdfK\xb4"/0 | 1 - vehicles | rides_auto_index_fk_city_ref_users | /107/2/"amsterdam"/"\xe2\xc2\xdcJ$\xf3A\xa2\x98\xad.\xe6\x1e\xf4gLݷ95\xfc0\x95\xea["/0 | 1 - vehicles | rides_auto_index_fk_city_ref_users | /107/2/"los angeles"/"h\xc8\xfa\xc5J\xf8A7\xbe\x98\xa3\x94\x8e\xf4\x991"/",\u007fh)\"\x92G̞\xde\xeb\x973\xdfK\xb4"/0 | 1 -(18 rows) 6 -~~~ - -### `cluster_contended_tables` - -Column | Type | Description -------------|-----|------------ -`database_name` | `STRING` | The name of the database experiencing [contention](performance-best-practices-overview.html#transaction-contention). -`schema_name` | `STRING` | The name of the schema experiencing [contention](performance-best-practices-overview.html#transaction-contention). -`table_name` | `STRING` | The name of the table experiencing [contention](performance-best-practices-overview.html#transaction-contention). -`num_contention_events` | `INT8` | The number of [contention](performance-best-practices-overview.html#transaction-contention) events. 
- -#### View all tables that have experienced contention - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM movr.crdb_internal.cluster_contended_tables; -~~~ -~~~ - database_name | schema_name | table_name | num_contention_events -----------------+-------------+------------+------------------------ - movr | public | vehicles | 9 -(1 row) -~~~ - -### `cluster_contention_events` - -Column | Type | Description -------------|-----|------------ -`table_id` | `INT8` | Unique table identifier. -`index_id` | `INT8` | Unique index identifier. -`num_contention_events` | `INT8` | The number of [contention](performance-best-practices-overview.html#transaction-contention) events. -`cumulative_contention_time` | `INTERVAL` | The cumulative time that the transaction spends waiting in [contention](performance-best-practices-overview.html#transaction-contention). -`key` | `BYTES` | The key experiencing [contention](performance-best-practices-overview.html#transaction-contention). -`txn_id` | `UUID` | Unique transaction identifier. -`count` | `INT8` | The number of [contention](performance-best-practices-overview.html#transaction-contention) events. - -#### View all contention events - -{% include_cached copy-clipboard.html %} -~~~sql -SELECT * FROM crdb_internal.cluster_contention_events; -~~~ -~~~ - table_id | index_id | num_contention_events | cumulative_contention_time | key | txn_id | count ------------+----------+-----------------------+----------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------+-------- - 107 | 2 | 9 | 00:00:00.039563 | \xf3\x8a\x12amsterdam\x00\x01\x12\xe2\xc2\xdcJ$\xf3A\xa2\x98\xad.\xe6\x1e\xf4gL\xdd\xb795\xfc0\x95\xea[\x00\x01\x88 | 58b59e88-87b8-45eb-bc9b-34e6a6b53a2c | 1 - 107 | 2 | 9 | 00:00:00.039563 | \xf3\x8a\x12new york\x00\x01\x12\xe9\xf9<\x99\\\x18K\xa2\x8d\xd2a\xa1d\x937\n\x00\x01\x12\xcd\xae\x18\xd4\xe9\xb3A\xf3\xb5\x9c\x177\x8c\xf6\x0e\xc5\x00\x01\x88 | 9af8a934-7ffa-4715-b802-f532f15bea9c | 1 - 107 | 2 | 9 | 00:00:00.039563 | \xf3\x8a\x12new york\x00\x01\x12\xfd\x81\xc0UK\x9cEE\xa7\x14b\xdb\x02\xad\x80\xe8\x00\x01\x12a\xc0q\x82\x8e)@\x89\xa2\x9c\xcc\xdb\x01\x1d\x8e_\x00\x01\x88 | af7ee3f2-23b8-46ef-bef7-a11b1be73443 | 1 - 107 | 2 | 9 | 00:00:00.039563 | \xf3\x8a\x12paris\x00\x01\x12^\xef\xc4\xa6\xed\x0bFI\x89\xe7\xfe\xbd\x8em\xe0\xb8\x00\x01\x12F%\xffZ\xe8\x93N\xfc\xa6\x17\xc0S\xb6\x86\xdd\xec\x00\x01\x88 | 594bec92-af85-4a8d-bb4d-cc17d5bbd905 | 1 - 107 | 2 | 9 | 00:00:00.039563 | \xf3\x8a\x12rome\x00\x01\x12oA\xc5\x86\xd8\xcdE\xfb\xb6\xb7\x8e9\xb4\xae\xc1,\x00\x01\x12\xda\xb2@\x1f\x1d\x05L\xc9\x8c\xba\xb9\x97\x84\x9e\x98\x1d\x00\x01\x88 | 815dcf03-7a0a-4c3a-98df-be7d84c1fd13 | 1 - 107 | 2 | 9 | 00:00:00.039563 | \xf3\x8a\x12seattle\x00\x01\x127\xfe\x953~\x94F\x95\xac\xc3\xa0P\x8e_A\x18\x00\x01\x12\xcc\xce\xad\xa0\xcaoA\xda\x98\x81\xab\xe9\xb6\'\xc9\x90\x00\x01\x88 | 5d382859-e88d-497a-830b-613fd20ef304 | 1 - 107 | 2 | 9 | 00:00:00.039563 | \xf3\x8a\x12seattle\x00\x01\x12v\xd4J%\x90\x1bF\x0b\x97\x02v\xc7\xee\xa9\xc7R\x00\x01\x12s\xa1\xad\x8c\xca\xe4G\t\xadG\x91\xa3\xa4\xae\xb7\xc7\x00\x01\x88 | e83df970-a01a-470f-914d-f5615eeec620 | 1 -(9 rows) -~~~ - -#### View the tables/indexes with the most time under contention - -To view the [tables](create-table.html) and [indexes](indexes.html) with the most cumulative time under 
[contention](performance-best-practices-overview.html#transaction-contention) since the last server restart, run the query below. - -{{site.data.alerts.callout_info}} -The default tracing behavior captures a small percent of transactions so not all contention events will be recorded. When investigating transaction contention, you can set the `sql.trace.txn.enable_threshold` [cluster setting](cluster-settings.html) to always capture contention events. -{{site.data.alerts.end}} - -{% include_cached copy-clipboard.html %} -~~~ sql -WITH c AS (SELECT DISTINCT ON (table_id, index_id) table_id, index_id, num_contention_events AS events, cumulative_contention_time AS time FROM crdb_internal.cluster_contention_events) SELECT i.descriptor_name, i.index_name, c.events, c.time FROM crdb_internal.table_indexes AS i JOIN c ON i.descriptor_id = c.table_id AND i.index_id = c.index_id ORDER BY c.time DESC LIMIT 10; -~~~ - -~~~ - descriptor_name | index_name | events | time -------------------+----------------+--------+------------------ - warehouse | warehouse_pkey | 7 | 00:00:01.046293 - district | district_pkey | 1 | 00:00:00.191346 - stock | stock_pkey | 1 | 00:00:00.158207 - order | order_pkey | 1 | 00:00:00.155404 - new_order | new_order_pkey | 1 | 00:00:00.100949 -(5 rows) -~~~ - -(The output above is for a [local cluster](start-a-local-cluster.html) running the [TPC-C workload](cockroach-workload.html#tpcc-workload) at a `--concurrency` of 256.) - -### `cluster_locks` - -The `crdb_internal.cluster_locks` schema contains information about [locks](architecture/transaction-layer.html#concurrency-control) held by [transactions](transactions.html) on specific [keys](architecture/overview.html#architecture-range). Queries acquire locks on keys within transactions, or they wait until they can acquire locks until other transactions have released locks on those keys. - -For more information, see the following sections. - -- [Cluster locks columns](#cluster-locks-columns) -- [Cluster locks - basic example](#cluster-locks-basic-example) -- [Cluster locks - intermediate example](#cluster-locks-intermediate-example) -- [Blocked vs. blocking transactions](#blocked-vs-blocking-transactions) -- [Client sessions holding locks](#client-sessions-holding-locks) -- [Count locks held by sessions](#count-locks-held-by-sessions) -- [Count queries waiting on locks](#count-queries-waiting-on-locks) - -#### Cluster locks columns - -The `crdb_internal.cluster_locks` table has the following columns that describe each lock: - -| Column | Type | Description | -|-------------------+-------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| `range_id` | [`INT`](int.html) | The ID of the [range](architecture/overview.html#architecture-range) that stores the key the lock is being acquired on. | -| `table_id` | [`INT`](int.html) | The ID of the [table](create-table.html) that includes the key the lock is being acquired on. | -| `database_name` | [`STRING`](string.html) | The name of the [database](create-database.html) that includes the key the lock is being acquired on. | -| `schema_name` | [`STRING`](string.html) | The name of the [schema](create-schema.html) that includes the key this lock is being acquired on. | -| `table_name` | [`STRING`](string.html) | The name of the [table](create-table.html) that includes the key this lock is being acquired on. 
| -| `index_name` | [`STRING`](string.html) | The name of the [index](indexes.html) that includes the key this lock is being acquired on. | -| `lock_key` | [`BYTES`](bytes.html) | The actual key that this lock is being acquired on. | -| `lock_key_pretty` | [`STRING`](string.html) | A string representation of the key this lock is being acquired on. | -| `txn_id` | [`UUID`](uuid.html) | The ID of the [transaction](transactions.html) that is acquiring this lock. | -| `ts` | [`TIMESTAMP`](timestamp.html) | The [timestamp](timestamp.html) at which this lock was acquired. | -| `lock_strength` | [`STRING`](string.html) | The strength of this lock. Allowed values: `"Exclusive"` or `"None"` (read-only requests don't need an exclusive lock). | -| `durability` | [`STRING`](string.html) | Whether the lock is one of: `Replicated` or `Unreplicated`. For more information about lock replication, see [types of locking](architecture/transaction-layer.html#writing). | -| `granted` | [`BOOLEAN`](bool.html) | Whether this lock has been granted to the [transaction](transactions.html) requesting it. | -| `contended` | [`BOOLEAN`](bool.html) | Whether multiple [transactions](transactions.html) are trying to acquire a lock on this key. | -| `duration` | [`INTERVAL`](interval.html) | The length of time this lock has been held for. | - -{{site.data.alerts.callout_success}} -You can see the types and default values of columns in this and other tables using [`SHOW COLUMNS FROM {table}`](show-columns.html). -{{site.data.alerts.end}} - -#### Cluster locks - basic example - -In this example, we'll use the [`SELECT FOR UPDATE`](select-for-update.html) statement to order two transactions by controlling concurrent access to a table. Then, we will look at the data in `cluster_locks` to see the locks being held by these transactions on the objects they are accessing. - -{% include {{page.version.version}}/sql/select-for-update-example-partial.md %} - -Now that we have two transactions both trying to update the `kv` table, let's query the data in `crdb_internal.cluster_locks`. We should see two locks: - -{% include_cached copy-clipboard.html %} -~~~ sql -SELECT database_name, table_name, txn_id, ts, lock_key_pretty, lock_strength, granted, contended FROM crdb_internal.cluster_locks WHERE table_name = 'kv'; -~~~ - -~~~ - database_name | table_name | txn_id | ts | lock_key_pretty | lock_strength | granted | contended -----------------+------------+--------------------------------------+----------------------------+------------------+---------------+---------+------------ - defaultdb | kv | d11a08c4-a3a2-4bdb-bf10-8d2373426faf | 2022-07-27 18:57:06.808046 | /Table/107/1/1/0 | Exclusive | true | true - defaultdb | kv | 34ebadb6-99f1-4547-b487-4b322506b7fe | 2022-07-27 18:57:13.173556 | /Table/107/1/1/0 | Exclusive | false | true -(2 rows) -~~~ - -As expected, there are two locks. This is the case because: - -- The transaction with the [`SELECT FOR UPDATE`](select-for-update.html) query in Terminal 1 asked for an `Exclusive` lock on a row in the `defaultdb.kv` table, as shown in the `lock_strength` column. We can see that it was able to get that lock, since the `granted` column is `true`. -- The transaction in Terminal 2 is also trying to lock the same row in the `kv` table with a `lock_strength` of `Exclusive`. However, the value of the `granted` column is `false`, which means it could not get the exclusive lock yet, and is waiting on the lock from the query in Terminal 1 to be released before it can proceed. 
- -Further, both transactions show the `contended` column as `true`, since these transactions are both trying to update rows in the `defaultdb.kv` table at the same time. - -The following more complex query shows additional information about lockholders, sessions, and waiting queries. This may be useful on a busy cluster for figuring out which transactions from which clients are trying to grab locks. Note that joining with `cluster_queries` will only show queries currently in progress. - -{% include_cached copy-clipboard.html %} -~~~ sql -SELECT - sessions.session_id, - sessions.client_address, - sessions.application_name, - locks.txn_id, - queries.query_id AS waiting_query_id, - queries.query AS waiting_query, - locks.lock_key_pretty, - locks.ts, - locks.database_name, - locks.schema_name, - locks.table_name, - locks.lock_strength, - locks.granted, - locks.contended -FROM - crdb_internal.cluster_locks AS locks - JOIN crdb_internal.cluster_sessions AS sessions ON - locks.txn_id::STRING = sessions.kv_txn - LEFT JOIN crdb_internal.cluster_queries AS queries ON - locks.txn_id = queries.txn_id -~~~ - -~~~ - session_id | client_address | application_name | txn_id | waiting_query_id | waiting_query | lock_key_pretty | ts | database_name | schema_name | table_name | lock_strength | granted | contended ------------------------------------+-----------------+------------------+--------------------------------------+----------------------------------+-----------------------------------------+------------------+----------------------------+---------------+-------------+------------+---------------+---------+------------ - 17056bb535ead9a00000000000000001 | 127.0.0.1:51093 | $ cockroach sql | ca692f0a-deca-4d4a-9a15-86f25c3b837f | NULL | NULL | /Table/107/1/1/0 | 2022-07-26 15:48:08.294631 | defaultdb | public | kv | Exclusive | true | true - 17056bb7e47a6ec00000000000000003 | 127.0.0.1:51094 | $ cockroach sql | 771fe98f-9e39-4ce4-90da-8ecb06d2a856 | 17056bbd362852780000000000000003 | SELECT * FROM kv WHERE k = 1 FOR UPDATE | /Table/107/1/1/0 | 2022-07-26 15:48:18.145594 | defaultdb | public | kv | Exclusive | false | true -(2 rows) -~~~ - -The output is similar to querying `cluster_locks` alone, except you can see the text of the SQL queries whose transactions are waiting on other transactions to finish, with additional information about the clients that initiated those transactions. - -{{site.data.alerts.callout_info}} -Locks are held by transactions, not queries. A lock can be acquired by a transaction as a result of a query, but CockroachDB does not track which query in a transaction caused that transaction to acquire a lock. -{{site.data.alerts.end}} - -#### Cluster locks - intermediate example - -This example assumes you have a cluster in the state it was left in by [the previous example](#cluster-locks-basic-example). - -In this example you will run a workload on the cluster with multiple concurrent transactions using the [bank workload](cockroach-workload.html#run-the-bank-workload). With a sufficiently high concurrency setting, the bank workload will frequently attempt to update multiple accounts at the same time. This will create plenty of locks to view in the `crdb_internal.cluster_locks` table. 
- -First, initialize the workload: - -{% include_cached copy-clipboard.html %} -~~~ shell -cockroach workload init bank 'postgresql://root@localhost:26257/bank?sslmode=disable' -~~~ - -Next, run it at a high concurrency setting: - -{% include_cached copy-clipboard.html %} -~~~ shell -cockroach workload run bank --concurrency=128 --duration=3m 'postgresql://root@localhost:26257/bank?sslmode=disable' -~~~ - -While the workload is running, issue the following query to view a subset of the locks being requested. - -{% include_cached copy-clipboard.html %} -~~~ sql -SELECT - database_name, - table_name, - txn_id, - ts, - lock_key_pretty, - lock_strength, - granted, - contended -FROM - crdb_internal.cluster_locks -WHERE table_name = 'bank' -LIMIT - 10 -~~~ - -~~~ - database_name | table_name | txn_id | ts | lock_key_pretty | lock_strength | granted | contended -----------------+------------+--------------------------------------+----------------------------+--------------------+---------------+---------+------------ - bank | bank | 7f0e262f-78e6-4a52-ad4e-d3cd5a851c82 | 2022-07-27 18:59:09.358877 | /Table/110/1/82/0 | Exclusive | true | false - bank | bank | c5bdc305-5940-43e1-8017-95260b4a1a39 | 2022-07-27 18:59:06.071559 | /Table/110/1/110/0 | Exclusive | true | true - bank | bank | ec872809-06b6-4320-b416-88b37c656f28 | 2022-07-27 18:59:05.843786 | /Table/110/1/110/0 | Exclusive | false | true - bank | bank | 7f4cd00d-2765-4b8d-b2e3-96e1c20e515e | 2022-07-27 18:59:06.345931 | /Table/110/1/110/0 | Exclusive | false | true - bank | bank | d6683639-a529-43b1-89ce-e7f0aa268426 | 2022-07-27 18:59:06.800857 | /Table/110/1/110/0 | Exclusive | false | true - bank | bank | ffbeb239-9fba-4cd8-8f20-ccebe3a069cd | 2022-07-27 18:59:07.485126 | /Table/110/1/110/0 | Exclusive | false | true - bank | bank | 7f64b1f5-e70e-4257-9d5b-5a26e48001cf | 2022-07-27 18:59:07.77492 | /Table/110/1/110/0 | Exclusive | false | true - bank | bank | 3e4ca7d5-77ef-474b-8ffa-4eeeffa0d190 | 2022-07-27 18:59:08.888788 | /Table/110/1/110/0 | Exclusive | false | true - bank | bank | 57d984d7-54a9-4c7f-893d-a8c67e748bbe | 2022-07-27 18:59:05.117683 | /Table/110/1/110/0 | Exclusive | false | true - bank | bank | 3c06319f-31c3-43ba-8323-9807e2c6de04 | 2022-07-27 18:59:05.117683 | /Table/110/1/110/0 | Exclusive | false | true -(10 rows) -~~~ - -As in the [basic example](#cluster-locks-basic-example), you can see that some transactions that wanted locks on the `bank` table are having to wait (`granted` is `false`), usually because they are trying to operate on the same rows as one or more other transactions (`contended` is `true`). - -The following more complex query shows additional information about lockholders, sessions, and waiting queries. This may be useful on a busy cluster for figuring out which transactions from which clients are trying to grab locks. Note that joining with `cluster_queries` will only show queries currently in progress. 
- -{% include_cached copy-clipboard.html %} -~~~ sql -SELECT - sessions.session_id, - sessions.client_address, - sessions.application_name, - locks.txn_id, - queries.query_id AS waiting_query_id, - queries.query AS waiting_query, - locks.lock_key_pretty, - locks.ts, - locks.database_name, - locks.schema_name, - locks.table_name, - locks.lock_strength, - locks.granted, - locks.contended -FROM - crdb_internal.cluster_locks AS locks - JOIN crdb_internal.cluster_sessions AS sessions ON - locks.txn_id::STRING = sessions.kv_txn - LEFT JOIN crdb_internal.cluster_queries AS queries ON - locks.txn_id = queries.txn_id -WHERE - locks.table_name = 'bank' -LIMIT - 10 -~~~ - -~~~ - session_id | client_address | application_name | txn_id | waiting_query_id | waiting_query | lock_key_pretty | ts | database_name | schema_name | table_name | lock_strength | granted | contended ------------------------------------+-----------------+------------------+--------------------------------------+----------------------------------+----------------------------------------------------------------------------------------------------------------------+--------------------+----------------------------+---------------+-------------+------------+---------------+---------+------------ - 17056cb7396db5900000000000000001 | 127.0.0.1:51220 | bank | 3bee08fd-636e-4d41-b28d-104bd0a4235b | 17056cded0850d800000000000000001 | UPDATE bank SET balance = CASE id WHEN 351 THEN balance - 725 WHEN 412 THEN balance + 725 END WHERE id IN (351, 412) | /Table/110/1/351/0 | 2022-07-26 16:09:31.215922 | bank | public | bank | Exclusive | false | true - 17056cb7396db5900000000000000001 | 127.0.0.1:51220 | bank | 3bee08fd-636e-4d41-b28d-104bd0a4235b | 17056cded0850d800000000000000001 | UPDATE bank SET balance = CASE id WHEN 351 THEN balance - 725 WHEN 412 THEN balance + 725 END WHERE id IN (351, 412) | /Table/110/1/412/0 | 2022-07-26 16:09:31.215922 | bank | public | bank | Exclusive | false | true - 17056cb73972ab180000000000000001 | 127.0.0.1:51221 | bank | 5c27678d-3acb-4d9d-8b98-774c5a10f933 | 17056ce6f3db1d080000000000000001 | UPDATE bank SET balance = CASE id WHEN 627 THEN balance - 794 WHEN 867 THEN balance + 794 END WHERE id IN (627, 867) | /Table/110/1/627/0 | 2022-07-26 16:09:36.945352 | bank | public | bank | Exclusive | true | false - 17056cb73972ab180000000000000001 | 127.0.0.1:51221 | bank | 5c27678d-3acb-4d9d-8b98-774c5a10f933 | 17056ce6f3db1d080000000000000001 | UPDATE bank SET balance = CASE id WHEN 627 THEN balance - 794 WHEN 867 THEN balance + 794 END WHERE id IN (627, 867) | /Table/110/1/867/0 | 2022-07-26 16:09:36.945352 | bank | public | bank | Exclusive | true | false - 17056cb739738da80000000000000001 | 127.0.0.1:51222 | bank | 1768cc1d-0929-490b-86bd-f39b9f78cc84 | 17056cdc278d47580000000000000001 | UPDATE bank SET balance = CASE id WHEN 351 THEN balance - 412 WHEN 412 THEN balance + 412 END WHERE id IN (351, 412) | /Table/110/1/351/0 | 2022-07-26 16:09:31.215922 | bank | public | bank | Exclusive | false | true - 17056cb739738da80000000000000001 | 127.0.0.1:51222 | bank | 1768cc1d-0929-490b-86bd-f39b9f78cc84 | 17056cdc278d47580000000000000001 | UPDATE bank SET balance = CASE id WHEN 351 THEN balance - 412 WHEN 412 THEN balance + 412 END WHERE id IN (351, 412) | /Table/110/1/412/0 | 2022-07-26 16:09:31.215922 | bank | public | bank | Exclusive | false | true - 17056cb739737e080000000000000001 | 127.0.0.1:51223 | bank | bf6ab816-67b7-4125-bf88-3d5c36986885 | 17056ce55b0065580000000000000001 | UPDATE bank SET balance = 
CASE id WHEN 137 THEN balance - 500 WHEN 895 THEN balance + 500 END WHERE id IN (137, 895) | /Table/110/1/137/0 | 2022-07-26 16:09:25.957139 | bank | public | bank | Exclusive | false | true - 17056cb739772b700000000000000001 | 127.0.0.1:51226 | bank | a9cff324-00bd-4e33-afc6-ff3341988c7e | 17056cd1cb394ab00000000000000001 | UPDATE bank SET balance = CASE id WHEN 498 THEN balance - 584 WHEN 690 THEN balance + 584 END WHERE id IN (498, 690) | /Table/110/1/690/0 | 2022-07-26 16:09:30.318277 | bank | public | bank | Exclusive | false | true - 17056cb7397b03d00000000000000001 | 127.0.0.1:51225 | bank | 5a591e0d-ba25-424f-aa13-502a7044355b | 17056cdb5243ea200000000000000001 | UPDATE bank SET balance = CASE id WHEN 110 THEN balance - 641 WHEN 895 THEN balance + 641 END WHERE id IN (110, 895) | /Table/110/1/110/0 | 2022-07-26 16:09:02.283466 | bank | public | bank | Exclusive | false | true - 17056cb7397bdaa80000000000000001 | 127.0.0.1:51227 | bank | 40a7f865-c6a6-40e2-8956-01f3f52b19f2 | 17056ccea1c21de00000000000000001 | UPDATE bank SET balance = CASE id WHEN 895 THEN balance - 506 WHEN 786 THEN balance + 506 END WHERE id IN (895, 786) | /Table/110/1/786/0 | 2022-07-26 16:08:19.842191 | bank | public | bank | Exclusive | false | true -(10 rows) -~~~ - -The output is similar to querying `cluster_locks` alone, except you can see the text of the SQL queries whose transactions are waiting on other transactions to finish, with additional information about the clients that initiated those transactions. - -{{site.data.alerts.callout_info}} -Locks are held by transactions, not queries. A lock can be acquired by a transaction as a result of a query within that transaction, but CockroachDB does not track which query in a transaction caused that transaction to acquire a lock. -{{site.data.alerts.end}} - -#### Blocked vs. blocking transactions - -Run the query below to display a list of pairs of [transactions](transactions.html) that are holding and waiting on locks for the same [keys](architecture/overview.html#architecture-range). - -This example assumes you are running the `bank` workload as described in the [intermediate example](#cluster-locks-intermediate-example). 
- -{% include_cached copy-clipboard.html %} -~~~ sql -SELECT - lh.database_name, - lh.table_name, - lh.range_id, - lh.lock_key_pretty, - lh.txn_id AS lock_holder, - lw.txn_id AS lock_waiter, - q.query AS waiting_query, - lw.duration AS wait_duration -FROM - crdb_internal.cluster_locks AS lh - JOIN crdb_internal.cluster_locks AS lw ON - lh.lock_key = lw.lock_key - JOIN crdb_internal.cluster_queries AS q ON - lw.txn_id = q.txn_id -WHERE - lh.granted = true - AND lh.txn_id IS DISTINCT FROM lw.txn_id - AND lh.table_name = 'bank' -LIMIT - 10 -~~~ - -~~~ - database_name | table_name | range_id | lock_key_pretty | lock_holder | lock_waiter | waiting_query | wait_duration -----------------+------------+----------+--------------------+--------------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------------+------------------ - bank | bank | 50 | /Table/110/1/412/0 | 89d8f3a6-463b-4119-89e9-c797680f35bb | ea5a95cc-923c-4262-b0f7-81014fd19ee1 | UPDATE bank SET balance = CASE id WHEN 351 THEN balance - 39 WHEN 412 THEN balance + 39 END WHERE id IN (351, 412) | 00:00:01.236359 - bank | bank | 50 | /Table/110/1/412/0 | 89d8f3a6-463b-4119-89e9-c797680f35bb | d4a80a8b-ede9-48ea-a7e1-375255f5aabe | UPDATE bank SET balance = CASE id WHEN 351 THEN balance - 551 WHEN 412 THEN balance + 551 END WHERE id IN (351, 412) | 00:00:01.268718 - bank | bank | 50 | /Table/110/1/412/0 | 89d8f3a6-463b-4119-89e9-c797680f35bb | 5406a762-1975-4346-ab7d-f32a4ff06469 | UPDATE bank SET balance = CASE id WHEN 351 THEN balance - 395 WHEN 412 THEN balance + 395 END WHERE id IN (351, 412) | 00:00:01.265879 - bank | bank | 50 | /Table/110/1/412/0 | 89d8f3a6-463b-4119-89e9-c797680f35bb | be93bcc1-921e-4b8a-9253-98f0848eef2c | UPDATE bank SET balance = CASE id WHEN 351 THEN balance - 725 WHEN 412 THEN balance + 725 END WHERE id IN (351, 412) | 00:00:01.273119 - bank | bank | 50 | /Table/110/1/412/0 | 89d8f3a6-463b-4119-89e9-c797680f35bb | a513d77f-9773-48c4-b888-1abbecba12ad | UPDATE bank SET balance = CASE id WHEN 351 THEN balance - 317 WHEN 412 THEN balance + 317 END WHERE id IN (351, 412) | 00:00:01.269164 - bank | bank | 50 | /Table/110/1/412/0 | 89d8f3a6-463b-4119-89e9-c797680f35bb | 56d14710-d40f-4bd3-a3f6-f8cc34996bad | UPDATE bank SET balance = CASE id WHEN 351 THEN balance - 758 WHEN 412 THEN balance + 758 END WHERE id IN (351, 412) | 00:00:01.271509 - bank | bank | 48 | /Table/110/1/839/0 | 1dd5d45e-98dd-4645-a1c2-9d1656a7c1e2 | 19ecea86-219d-4979-a7ed-04c6fa81e6db | UPDATE bank SET balance = CASE id WHEN 839 THEN balance - 830 WHEN 758 THEN balance + 830 END WHERE id IN (839, 758) | 00:00:00.775891 - bank | bank | 50 | /Table/110/1/412/0 | 89d8f3a6-463b-4119-89e9-c797680f35bb | 1a7ce7a8-6145-4d71-a05a-6a22e641a3d5 | UPDATE bank SET balance = CASE id WHEN 351 THEN balance - 711 WHEN 412 THEN balance + 711 END WHERE id IN (351, 412) | 00:00:01.268693 - bank | bank | 50 | /Table/110/1/412/0 | 89d8f3a6-463b-4119-89e9-c797680f35bb | 9c9da2b9-a468-45b6-af7f-88f0ade5c9f9 | UPDATE bank SET balance = CASE id WHEN 351 THEN balance - 642 WHEN 412 THEN balance + 642 END WHERE id IN (351, 412) | 00:00:01.267889 - bank | bank | 50 | /Table/110/1/412/0 | 89d8f3a6-463b-4119-89e9-c797680f35bb | 997183e4-f1c7-46eb-882b-5b4cf76ff1ab | UPDATE bank SET balance = CASE id WHEN 351 THEN balance - 296 WHEN 412 THEN balance + 296 END WHERE id IN (351, 412) | 00:00:01.269917 -(10 rows) -~~~ - -#### Client sessions holding locks 
- -Run the query below to display a list of [sessions](show-sessions.html) that are holding and waiting on locks for the same [keys](architecture/overview.html#architecture-range). - -This example assumes you are running the `bank` workload as described in the [intermediate example](#cluster-locks-intermediate-example). - -{% include_cached copy-clipboard.html %} -~~~ sql -SELECT - l.database_name, - l.table_name, - l.range_id, - l.lock_key_pretty, - l.txn_id, - l.granted, - s.node_id, - s.user_name, - s.client_address -FROM - crdb_internal.cluster_locks AS l - JOIN crdb_internal.cluster_transactions AS t ON - l.txn_id = t.id - JOIN crdb_internal.cluster_sessions AS s ON - t.session_id = s.session_id -WHERE - l.granted = true AND l.table_name = 'bank'; -~~~ - -~~~ - database_name | table_name | range_id | lock_key_pretty | txn_id | granted | node_id | user_name | client_address -----------------+------------+----------+--------------------+--------------------------------------+---------+---------+-----------+------------------ - bank | bank | 54 | /Table/110/1/140/0 | 7aa534d6-c6df-4a98-9ef5-33ddb7c20618 | true | 1 | root | 127.0.0.1:51398 - bank | bank | 53 | /Table/110/1/786/0 | 92490011-e63b-4157-b7e5-6bd17ac6e94c | true | 1 | root | 127.0.0.1:51400 - bank | bank | 50 | /Table/110/1/457/0 | 2e815ed3-4178-4eff-a5c5-b04fd6e76d50 | true | 1 | root | 127.0.0.1:51412 - bank | bank | 54 | /Table/110/1/137/0 | fd735721-d8a0-4157-b370-00f132bb6263 | true | 1 | root | 127.0.0.1:51425 - bank | bank | 54 | /Table/110/1/110/0 | e7acec14-dbf9-4cd9-9ae6-43adac826810 | true | 1 | root | 127.0.0.1:51443 - bank | bank | 49 | /Table/110/1/351/0 | 2ff62140-62d8-46e7-9c4a-c3f778519549 | true | 1 | root | 127.0.0.1:51427 - bank | bank | 50 | /Table/110/1/412/0 | 2ff62140-62d8-46e7-9c4a-c3f778519549 | true | 1 | root | 127.0.0.1:51427 - bank | bank | 48 | /Table/110/1/895/0 | bd676999-37f5-42b4-9278-7b3de5451abe | true | 1 | root | 127.0.0.1:51439 - bank | bank | 65 | /Table/110/1/60/0 | 51aacef3-370e-4760-8e8f-60e6e9db4a19 | true | 1 | root | 127.0.0.1:51461 - bank | bank | 54 | /Table/110/1/134/0 | 51aacef3-370e-4760-8e8f-60e6e9db4a19 | true | 1 | root | 127.0.0.1:51461 - bank | bank | 49 | /Table/110/1/316/0 | 5bffc2fa-43f1-4b76-acfa-2d1d4e09c793 | true | 1 | root | 127.0.0.1:51472 - bank | bank | 47 | /Table/110/1/504/0 | 5bffc2fa-43f1-4b76-acfa-2d1d4e09c793 | true | 1 | root | 127.0.0.1:51472 -(12 rows) -~~~ - -#### Count locks held by sessions - -Run the query below to show a list of lock counts being held by different [sessions](show-sessions.html). - -This example assumes you are running the `bank` workload as described in the [intermediate example](#cluster-locks-intermediate-example). 
- -{% include_cached copy-clipboard.html %} -~~~ sql -SELECT - l.database_name, - l.table_name, - s.user_name, - s.client_address, - s.session_id, - count(lock_key_pretty) AS lock_count -FROM - crdb_internal.cluster_locks AS l - JOIN crdb_internal.cluster_transactions AS t ON - l.txn_id = t.id - JOIN crdb_internal.cluster_sessions AS s ON - t.session_id = s.session_id -WHERE - l.granted = true AND l.table_name = 'bank' -GROUP BY - l.database_name, - l.table_name, - s.user_name, - s.client_address, - s.session_id -~~~ - -~~~ - database_name | table_name | user_name | client_address | session_id | lock_count -----------------+------------+-----------+-----------------+----------------------------------+------------- - bank | bank | root | 127.0.0.1:51403 | 17056d4b2f62aeb80000000000000001 | 1 - bank | bank | root | 127.0.0.1:51405 | 17056d4b2f6cc8a80000000000000001 | 1 - bank | bank | root | 127.0.0.1:51421 | 17056d4b2f8acc400000000000000001 | 1 - bank | bank | root | 127.0.0.1:51425 | 17056d4b2f96ab500000000000000001 | 1 - bank | bank | root | 127.0.0.1:51492 | 17056d4b3019be000000000000000001 | 1 - bank | bank | root | 127.0.0.1:51485 | 17056d4b3026f8b80000000000000001 | 2 -(6 rows) -~~~ - -#### Count queries waiting on locks - -Run the query below to show a list of [keys](architecture/overview.html#architecture-range) ordered by how many transactions are waiting on the locks on those keys. - -This example assumes you are running the `bank` workload as described in the [intermediate example](#cluster-locks-intermediate-example). - -{% include_cached copy-clipboard.html %} -~~~ sql -SELECT - l.database_name, - l.table_name, - l.range_id, - l.lock_key_pretty, - count(*) AS waiter_count, - max(duration) AS longest_wait_duration -FROM - crdb_internal.cluster_locks AS l -WHERE - l.granted = false AND l.table_name = 'bank' -GROUP BY - l.database_name, - l.table_name, - l.range_id, - l.lock_key_pretty -ORDER BY - waiter_count DESC, longest_wait_duration DESC -LIMIT - 30 -~~~ - -~~~ - database_name | table_name | range_id | lock_key_pretty | waiter_count | longest_wait_duration -----------------+------------+----------+--------------------+--------------+------------------------ - bank | bank | 48 | /Table/110/1/895/0 | 81 | 00:00:02.91943 - bank | bank | 54 | /Table/110/1/110/0 | 69 | 00:00:02.074832 - bank | bank | 53 | /Table/110/1/786/0 | 41 | 00:01:16.871542 - bank | bank | 54 | /Table/110/1/137/0 | 10 | 00:01:49.082237 - bank | bank | 55 | /Table/110/1/690/0 | 2 | 00:00:44.526906 - bank | bank | 50 | /Table/110/1/498/0 | 2 | 00:00:43.473285 -(6 rows) -~~~ - -### `cluster_queries` - -Column | Type | Description -------------|-----|------------ -`query_id` | `STRING` | Unique query identifier. -`txn_id` | `UUID` | Unique transaction identifier. -`node_id` | `INT8` | The ID of the node on which the query is executed. -`session_id` | `STRING` | Unique session identifier. -`user_name` | `STRING` | The name of the user that executed the query. -`start` | `TIMESTAMP` | The time that the query started. -`query` | `STRING` | The query string. -`client_address` | `STRING` | The address of the client that initiated the query. -`application_name` | `STRING` | The name of the application that initiated the query. -`distributed` | `BOOLEAN` | Whether the query is executing in a distributed cluster. -`phase` | `STRING` | The phase that the query is in. 
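-
-#### Find long-running queries
-
-The following is a minimal sketch that uses only the columns described above to surface queries that have been running for a long time; the 30-second threshold is an arbitrary example, so adjust it to your workload:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-SELECT query_id, node_id, user_name, start, query, application_name
-FROM crdb_internal.cluster_queries
-WHERE start < now() - INTERVAL '30 seconds'
-ORDER BY start;
-~~~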
- -#### View all active queries for the `movr` application - -{% include_cached copy-clipboard.html %} -~~~sql -SELECT * FROM crdb_internal.cluster_queries where application_name = 'movr'; -~~~ -~~~ - query_id | txn_id | node_id | session_id | user_name | start | query | client_address | application_name | distributed | phase ------------------------------------+--------------------------------------+---------+----------------------------------+-----------+----------------------------+--------------------------------------------------------------------------------------------------------------------------+-----------------+------------------+-------------+------------ - 16f762fea4cb17180000000000000001 | 7d55d442-6ae6-4062-ba3b-90656e9a6544 | 1 | 16f762c2af917e800000000000000001 | root | 2022-06-10 22:30:33.907888 | UPSERT INTO vehicle_location_histories VALUES ('amsterdam', '69db0184-4192-4355-99b0-2e2abe7212c2', now(), 109.0, -45.0) | 127.0.0.1:49198 | movr | false | executing -(1 row) -~~~ - -### `cluster_sessions` - -Column | Type | Description -------------|-----|------------ -`node_id` | `INT8` | The ID of the node the session is connected to. -`session_id` | `STRING` | The ID of the session. -`user_name` | `STRING` | The name of the user that initiated the session. -`client_address` | `STRING` | The address of the client that initiated the session. -`application_name` | `STRING` | The name of the application that initiated the session. -`active_queries` | `STRING` | The SQL queries active in the session. -`last_active_query` | `STRING` | The most recently completed SQL query in the session. -`session_start` | `TIMESTAMP` | The timestamp at which the session started. -`oldest_query_start` | `TIMESTAMP` | The timestamp at which the oldest currently active SQL query in the session started. -`kv_txn` | `STRING` | The ID of the current key-value transaction for the session. -`alloc_bytes` | `INT8` | The number of bytes allocated by the session. -`max_alloc_bytes` | `INT8` | The maximum number of bytes allocated by the session. - -#### View all open SQL sessions for the `movr` application - -{% include_cached copy-clipboard.html %} -~~~sql -SELECT * FROM crdb_internal.cluster_sessions where application_name = 'movr'; -~~~ -~~~ - node_id | session_id | user_name | client_address | application_name | active_queries | last_active_query | session_start | oldest_query_start | kv_txn | alloc_bytes | max_alloc_bytes -----------+----------------------------------+-----------+-----------------+------------------+------------------------------------------------------------+-----------------------------------------------+---------------------------+----------------------------+--------------------------------------+-------------+------------------ - 1 | 16f762c2af917e800000000000000001 | root | 127.0.0.1:49198 | movr | SELECT city, id FROM vehicles WHERE city = 'washington dc' | SELECT city, id FROM vehicles WHERE city = $1 | 2022-06-10 22:26:16.39059 | 2022-06-10 22:28:00.646594 | 7883cbe3-7cf3-4155-a1a8-82d1211c9ffa | 133120 | 163840 -(1 row) -~~~ - -### `cluster_transactions` - -Column | Type | Description -------------|-----|------------ -`id` | `UUID` | The unique ID that identifies the transaction. -`node_id` | `INT8` | The ID of the node the transaction is connected to. -`session_id` | `STRING` | The ID of the session running the transaction. -`start` | `TIMESTAMP` | The time the transaction started. -`txn_string` | `STRING` | The transaction string. 
-`application_name` | `STRING` | The name of the application that ran the transaction. -`num_stmts` | `INT8` | The number of statements in the transaction. -`num_retries` | `INT8` | The number of times the transaction was retried. -`num_auto_retries` | `INT8` | The number of times the transaction was automatically retried. - -#### View all active transactions for the `movr` application - -{% include_cached copy-clipboard.html %} -~~~sql -SELECT * FROM crdb_internal.cluster_transactions where application_name = 'movr'; -~~~ -~~~ - id | node_id | session_id | start | txn_string | application_name | num_stmts | num_retries | num_auto_retries ----------------------------------------+---------+----------------------------------+----------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------+-----------+-------------+------------------- - 9cfb2bbe-e1d8-4650-95d5-7a3fd052c6a7 | 1 | 16f762c2af917e800000000000000001 | 2022-06-10 22:26:20.370946 | "sql txn" meta={id=9cfb2bbe key=/Table/109/1/"rome"/"\xbf\x92\xe2<̯D\v\x94M\x83b0\x1b-\x82"/2022-06-10T22:26:20.370935Z/0 pri=0.00109794 epo=0 ts=1654899980.370940000,0 min=1654899980.370940000,0 seq=1} lock=true stat=PENDING rts=1654899980.370940000,0 wto=false gul=1654899980.870940000,0 | movr | 1 | 0 | 0 -(1 row) -~~~ - -### `index_usage_statistics` - -Contains one row for each index in the current database surfacing usage statistics for that specific index. This view is updated every time a transaction is committed. Each user-submitted statement on the specified index is counted as a use of that index and increments corresponding counters in this view. System and internal queries (such as scans for gathering statistics) are not counted. - -Column | Type | Description -------------|-----|------------ -`table_id` | `INT8` | Unique table identifier. -`index_id` | `INT8` | Unique index identifier. -`total_reads` | `INT8` | Number of times an index was selected for a read. -`last_read` | `TIMESTAMPTZ` | Time of last read. - -You can reset the index usages statistics by invoking the function `crdb_internal.reset_index_usage_stats()`. 
For example: - -{% include_cached copy-clipboard.html %} -~~~ sql -SELECT crdb_internal.reset_index_usage_stats(); -~~~ - -~~~ - crdb_internal.reset_index_usage_stats ------------------------------------------ - true -(1 row) -~~~ - -#### View index statistics by table and index name - -To view index usage statistics by table and index name, join with `table_indexes`: - -{% include_cached copy-clipboard.html %} -~~~sql -SELECT ti.descriptor_name as table_name, ti.index_name, total_reads -FROM crdb_internal.index_usage_statistics AS us -JOIN crdb_internal.table_indexes ti -ON us.index_id = ti.index_id AND us.table_id = ti.descriptor_id -ORDER BY total_reads desc; -~~~ - -~~~ - table_name | index_name | total_reads ------------------------------+-----------------------------------------------+-------------- - vehicles | vehicles_auto_index_fk_city_ref_users | 412266 - rides | rides_pkey | 216137 - users | users_pkey | 25709 - vehicles | vehicles_pkey | 17185 - promo_codes | promo_codes_pkey | 4274 - user_promo_codes | user_promo_codes_pkey | 2138 - rides | rides_auto_index_fk_city_ref_users | 1 - rides | rides_auto_index_fk_vehicle_city_ref_vehicles | 1 - vehicle_location_histories | vehicle_location_histories_pkey | 1 -(9 rows) -~~~ - -#### Determine which indexes haven't been used in the past week - -To determine if there are indexes that have become stale and are no longer needed, show which indexes haven't been used during the past week with the following query: - -{% include_cached copy-clipboard.html %} -~~~sql -SELECT ti.descriptor_name as table_name, ti.index_name, total_reads -FROM crdb_internal.index_usage_statistics AS us -JOIN crdb_internal.table_indexes ti -ON us.index_id = ti.index_id AND us.table_id = ti.descriptor_id -WHERE last_read < NOW() - INTERVAL '1 WEEK' -ORDER BY total_reads desc; -~~~ - -#### Determine which indexes are no longer used - -View which indexes are no longer used with the following query: - -{% include_cached copy-clipboard.html %} -~~~sql -SELECT ti.descriptor_name as table_name, ti.index_name, total_reads -FROM crdb_internal.index_usage_statistics AS us -JOIN crdb_internal.table_indexes ti -ON us.index_id = ti.index_id AND us.table_id = ti.descriptor_id -WHERE total_reads = 0; -~~~ - -### `statement_statistics` - -Column | Type | Description -------------|-----|------------ -`aggregated_ts` | `TIMESTAMPTZ NOT NULL` | The time that statistics aggregation started. -`fingerprint_id` | `BYTES NOT NULL` | Unique identifier of the statement statistics. This is constructed using the statement fingerprint text, and statement metadata (e.g., query type, database name, etc.) -`transaction_fingerprint_id` | `BYTES NOT NULL` | Uniquely identifies a transaction statistics. The transaction fingerprint ID that this statement statistic belongs to. -`plan_hash` | `BYTES NOT NULL` | Uniquely identifies a query plan that was executed by the current statement. The query plan can be retrieved from the `sampled_plan` column. -`app_name` | `STRING NOT NULL`| The name of the application that executed the statement. -`metadata` | `JSONB NOT NULL` | Metadata that describes the statement. See [`metadata` column](#metadata-column). -`statistics` | `JSONB NOT NULL` | Statistics for the statement. See [`statistics` column](#statistics-column). -`sampled_plan` | `JSONB NOT NULL` | The sampled query plan of the current statement statistics. This column is unfilled if there is no sampled query plan. 
-`aggregation_interval` | `INTERVAL NOT NULL` | The interval over which statistics are aggregated. - -#### `metadata` column - -Field | Type | Description -------------|-----|------------ -`db` | `STRING` | The database on which the statement is executed. -`distsql` | `BOOLEAN` | Whether the statement is being executed by the Distributed SQL (DistSQL) engine. -`failed` | `BOOLEAN` | Whether the statement execution failed. -`fullScan` | `BOOLEAN` | Whether the statement performed a full scan of the table. -`implicitTxn` | `BOOLEAN` | Whether the statement executed in an implicit transaction. -`query` | `STRING` | The statement string. -`querySummary` | `STRING` | The statement string summary. -`stmtTyp` | `STRING` | The type of SQL statement: `"TypeDDL"`, `"TypeDML"`, `"TypeDCL"`, or `"TypeTCL"`. These types map to the CockroachDB statement types [data definition language (DDL)](sql-statements.html#data-definition-statements), [data manipulation language (DML)](sql-statements.html#data-manipulation-statements), [data control language (DCL)](sql-statements.html#data-control-statements), and [transaction control language (TCL)](sql-statements.html#transaction-control-statements). -`vec` | `BOOLEAN` | Whether the statement executed in the vectorized query engine. - -#### `statistics` column - -The [DB Console](ui-overview.html) [Statements](ui-statements-page.html) and [Statement Fingerprint](ui-statements-page.html#statement-fingerprint-page) pages display information from `statistics`. - -The `statistics` column contains a JSONB object with `statistics` and `execution_statistics` subobjects. [`statistics`](ui-statements-page.html#statement-statistics) are always populated and are updated each time a new statement of that statement fingerprint is executed. [`execution_statistics`](ui-statements-page.html#charts) are collected using sampling. CockroachDB probabilistically runs a query with tracing enabled to collect fine-grained statistics of the query execution. - -The `NumericStat` type tracks two running values: the running mean `mean` and the running sum of squared differences `sqDiff` from the mean. You can use these statistics along with the total number of values to compute the variance using Welford's method. CockroachDB computes the variance and displays it along with `mean` in the [Statements table](ui-statements-page.html#statements-table). - -Field | Type | Description -------------|-----|------------ -`execution_statistics -> cnt` | `INT64` | The number of times execution statistics were recorded. -execution_statistics -> contentionTime -> [mean|sqDiff] | `NumericStat` | The time the statement spent contending for resources before being executed. -execution_statistics -> maxDiskUsage -> [mean|sqDiff] | `NumericStat` | The maximum temporary disk usage that occurred while executing this statement. This is set in cases where a query had to spill to disk, e.g., when performing a large sort where not all of the tuples fit in memory. -execution_statistics -> maxMemUsage -> [mean|sqDiff] | `NumericStat` | The maximum memory usage that occurred on a node. -execution_statistics -> networkBytes -> [mean|sqDiff] | `NumericStat` | The number of bytes sent over the network. -execution_statistics -> networkMsgs -> [mean|sqDiff] | `NumericStat` | The number of messages sent over the network. -statistics -> bytesRead -> [mean|sqDiff] | `NumericStat` | The number of bytes read from disk. 
-`statistics -> cnt` | `INT8` | The total number of times this statement was executed since the beginning of the aggregation period.
-`statistics -> firstAttemptCnt` | `INT8` | The total number of times a first attempt was executed (either the one time in explicitly committed statements, or the first time in implicitly committed statements with implicit retries).
-`statistics -> lastExecAt` | `TIMESTAMP` | The timestamp at which the statement was last executed.
-`statistics -> maxRetries` | `INT8` | The maximum observed number of automatic retries in the aggregation period.
-`statistics -> nodes` | Array of `INT64` | An ordered list of node IDs on which the statement was executed.
-statistics -> numRows -> [mean|sqDiff] | `NumericStat` | The number of rows returned or observed.
-statistics -> ovhLat -> [mean|sqDiff] | `NumericStat` | The difference between `svcLat` and the sum of `parseLat+planLat+runLat` latencies.
-statistics -> parseLat -> [mean|sqDiff] | `NumericStat` | The time to transform the SQL string into an abstract syntax tree (AST).
-statistics -> planGists | `String` | **New in v22.1:** A sequence of bytes representing the flattened tree of operators and various operator-specific metadata of the statement plan.
-statistics -> planLat -> [mean|sqDiff] | `NumericStat` | The time to transform the AST into a logical query plan.
-statistics -> rowsRead -> [mean|sqDiff] | `NumericStat` | The number of rows read from disk.
-statistics -> rowsWritten -> [mean|sqDiff] | `NumericStat` | The number of rows written to disk.
-statistics -> runLat -> [mean|sqDiff] | `NumericStat` | The time to run the query and fetch or compute the result rows.
-statistics -> svcLat -> [mean|sqDiff] | `NumericStat` | The time to service the query, from the start of parsing to the end of execution.
-
-#### View historical statement statistics and the sampled logical plan per fingerprint
-
-This example command shows how to query the two most important JSON columns: `metadata` and `statistics`.
It displays -the first 60 characters of query text, statement statistics, and sampled plan for DDL and DML statements for the [`movr`](movr.html) demo database: - -{% include_cached copy-clipboard.html %} -~~~sql -SELECT substring(metadata ->> 'query',1,60) AS statement_text, - metadata ->> 'stmtTyp' AS statement_type, - metadata -> 'distsql' AS is_distsql, - metadata -> 'fullScan' AS has_full_scan, - metadata -> 'vec' AS used_vec, - statistics -> 'execution_statistics' -> 'contentionTime' -> 'mean' AS contention_time_mean, - statistics -> 'statistics' -> 'cnt' AS execution_count, - statistics -> 'statistics' -> 'firstAttemptCnt' AS num_first_attempts, - statistics -> 'statistics' -> 'numRows' -> 'mean' AS num_rows_returned_mean, - statistics -> 'statistics' -> 'rowsRead' -> 'mean' AS num_rows_read_mean, - statistics -> 'statistics' -> 'runLat' -> 'mean' AS runtime_latency_mean, - jsonb_pretty(sampled_plan) AS sampled_plan -FROM movr.crdb_internal.statement_statistics -WHERE metadata @> '{"db":"movr"}' AND (metadata @> '{"stmtTyp":"TypeDDL"}' OR metadata @> '{"stmtTyp":"TypeDML"}') LIMIT 20; -~~~ -~~~ - statement_text | statement_type | is_distsql | has_full_scan | used_vec | contention_time_mean | execution_count | num_first_attempts | num_rows_returned_mean | num_rows_read_mean | runtime_latency_mean | sampled_plan ----------------------------------------------------------------+----------------+------------+---------------+----------+----------------------+-----------------+--------------------+------------------------+--------------------+----------------------+-------------------------------------------------------------------------------------------------------------------------------- - ALTER TABLE rides ADD FOREIGN KEY (city, rider_id) REFERENCE | TypeDDL | false | false | true | 0 | 1 | 1 | 0 | 0 | 0.007348 | { - | | | | | | | | | | | "Children": [], - | | | | | | | | | | | "Name": "alter table" - | | | | | | | | | | | } - ALTER TABLE rides ADD FOREIGN KEY (vehicle_city, vehicle_id) | TypeDDL | false | false | true | 0 | 1 | 1 | 0 | 0 | 0.006618 | { - | | | | | | | | | | | "Children": [], - | | | | | | | | | | | "Name": "alter table" - | | | | | | | | | | | } - ALTER TABLE rides SCATTER FROM ('_', '_') TO ('_', '_') | TypeDML | false | false | true | 0 | 8 | 8 | 1 | 0 | 0.00066175 | { - | | | | | | | | | | | "Children": [], - | | | | | | | | | | | "Name": "scatter" - | | | | | | | | | | | } - ALTER TABLE rides SPLIT AT VALUES ('_', '_') | TypeDML | false | false | true | 0 | 8 | 8 | 1 | 0 | 0.031441875 | { - | | | | | | | | | | | "Children": [ - | | | | | | | | | | | { - | | | | | | | | | | | "Children": [], - | | | | | | | | | | | "Name": "values", - | | | | | | | | | | | "Size": "2 columns, 1 row" - | | | | | | | | | | | } - | | | | | | | | | | | ], - | | | | | | | | | | | "Name": "split" - | | | | | | | | | | | } - ALTER TABLE user_promo_codes ADD FOREIGN KEY (city, user_id) | TypeDDL | false | false | true | 0 | 1 | 1 | 0 | 0 | 0.008143 | { - | | | | | | | | | | | "Children": [], - | | | | | | | | | | | "Name": "alter table" - | | | | | | | | | | | } - ALTER TABLE users SCATTER FROM ('_', '_') TO ('_', '_') | TypeDML | false | false | true | 0 | 8 | 8 | 1 | 0 | 0.001272 | { - | | | | | | | | | | | "Children": [], - | | | | | | | | | | | "Name": "scatter" - | | | | | | | | | | | } - ALTER TABLE users SPLIT AT VALUES ('_', '_') | TypeDML | false | false | true | 0 | 8 | 8 | 1 | 0 | 0.179651125 | { - | | | | | | | | | | | "Children": [ - | | | | | | | | | | | { - | | | | | | | | 
| | | "Children": [], - | | | | | | | | | | | "Name": "values", - | | | | | | | | | | | "Size": "2 columns, 1 row" - | | | | | | | | | | | } - | | | | | | | | | | | ], - | | | | | | | | | | | "Name": "split" - | | | | | | | | | | | } - ALTER TABLE vehicle_location_histories ADD FOREIGN KEY (city | TypeDDL | false | false | true | 0 | 1 | 1 | 0 | 0 | 0.007684 | { - | | | | | | | | | | | "Children": [], - | | | | | | | | | | | "Name": "alter table" - | | | | | | | | | | | } - ALTER TABLE vehicles ADD FOREIGN KEY (city, owner_id) REFERE | TypeDDL | false | false | true | 0 | 1 | 1 | 0 | 0 | 0.004085 | { - | | | | | | | | | | | "Children": [], - | | | | | | | | | | | "Name": "alter table" - | | | | | | | | | | | } - ALTER TABLE vehicles SCATTER FROM ('_', '_') TO ('_', '_') | TypeDML | false | false | true | 0 | 8 | 8 | 1 | 0 | 0.000702 | { - | | | | | | | | | | | "Children": [], - | | | | | | | | | | | "Name": "scatter" - | | | | | | | | | | | } - ALTER TABLE vehicles SPLIT AT VALUES ('_', '_') | TypeDML | false | false | true | 0 | 8 | 8 | 1 | 0 | 0.008966375 | { - | | | | | | | | | | | "Children": [ - | | | | | | | | | | | { - | | | | | | | | | | | "Children": [], - | | | | | | | | | | | "Name": "values", - | | | | | | | | | | | "Size": "2 columns, 1 row" - | | | | | | | | | | | } - | | | | | | | | | | | ], - | | | | | | | | | | | "Name": "split" - | | | | | | | | | | | } - CREATE DATABASE movr | TypeDDL | false | false | true | 0 | 1 | 1 | 0 | 0 | 0.001397 | { - | | | | | | | | | | | "Children": [], - | | | | | | | | | | | "Name": "create database" - | | | | | | | | | | | } - CREATE TABLE IF NOT EXISTS promo_codes (code VARCHAR NOT NUL | TypeDDL | false | false | true | 0 | 1 | 1 | 0 | 0 | 0.001789 | { - | | | | | | | | | | | "Children": [], - | | | | | | | | | | | "Name": "create table" - | | | | | | | | | | | } - CREATE TABLE IF NOT EXISTS rides (id UUID NOT NULL, city VAR | TypeDDL | false | false | true | 0 | 1 | 1 | 0 | 0 | 0.002374 | { - | | | | | | | | | | | "Children": [], - | | | | | | | | | | | "Name": "create table" - | | | | | | | | | | | } - CREATE TABLE IF NOT EXISTS user_promo_codes (city VARCHAR NO | TypeDDL | false | false | true | 0 | 1 | 1 | 0 | 0 | 0.006318 | { - | | | | | | | | | | | "Children": [], - | | | | | | | | | | | "Name": "create table" - | | | | | | | | | | | } - CREATE TABLE IF NOT EXISTS users (id UUID NOT NULL, city VAR | TypeDDL | false | false | true | 0 | 1 | 1 | 0 | 0 | 0.002014 | { - | | | | | | | | | | | "Children": [], - | | | | | | | | | | | "Name": "create table" - | | | | | | | | | | | } - CREATE TABLE IF NOT EXISTS vehicle_location_histories (city | TypeDDL | false | false | true | 0 | 1 | 1 | 0 | 0 | 0.001906 | { - | | | | | | | | | | | "Children": [], - | | | | | | | | | | | "Name": "create table" - | | | | | | | | | | | } - CREATE TABLE IF NOT EXISTS vehicles (id UUID NOT NULL, city | TypeDDL | false | false | true | 0 | 1 | 1 | 0 | 0 | 0.003346 | { - | | | | | | | | | | | "Children": [], - | | | | | | | | | | | "Name": "create table" - | | | | | | | | | | | } - INSERT INTO promo_codes VALUES ($1, $2, __more3__), (__more9 | TypeDML | false | false | true | 0 | 250 | 250 | 1E+3 | 0 | 0.010470284000000002 | { - | | | | | | | | | | | "Auto Commit": "", - | | | | | | | | | | | "Children": [], - | | | | | | | | | | | "Into": "promo_codes(code, description, creation_time, expiration_time, rules)", - | | | | | | | | | | | "Name": "insert fast path", - | | | | | | | | | | | "Size": "5 columns, 1000 rows" - | | | | | | | | | | | } - INSERT INTO rides 
VALUES ($1, $2, __more8__), (__more900__) | TypeDML | false | false | true | 0 | 125 | 125 | 1E+3 | 0 | 0.054189928000000005 | { - | | | | | | | | | | | "Auto Commit": "", - | | | | | | | | | | | "Children": [ - | | | | | | | | | | | { - | | | | | | | | | | | "Children": [ - | | | | | | | | | | | { - | | | | | | | | | | | "Children": [], - | | | | | | | | | | | "Name": "values", - | | | | | | | | | | | "Size": "10 columns, 1000 rows" - | | | | | | | | | | | } - | | | | | | | | | | | ], - | | | | | | | | | | | "Estimated Row Count": "1,000", - | | | | | | | | | | | "Name": "render" - | | | | | | | | | | | } - | | | | | | | | | | | ], - | | | | | | | | | | | "Into": "rides(id, city, vehicle_city, rider_id, vehicle_id, start_address, end_address, start_time, end_time, revenue)", - | | | | | | | | | | | "Name": "insert" - | | | | | | | | | | | } - -~~~ - -#### Detect suboptimal and regressed plans - -{% include_cached new-in.html version="v22.1" %} Historical plans are stored in plan gists in `statistics->'statistics'->'planGists'`. To detect suboptimal and regressed plans over time you can compare plans for the same query by extracting them from the plan gists. - -Suppose you wanted to compare plans of the following query: - -{% include_cached copy-clipboard.html %} -~~~ sql -SELECT - name, count(rides.id) AS sum -FROM - users JOIN rides ON users.id = rides.rider_id -WHERE - rides.start_time BETWEEN '2018-12-31 00:00:00' AND '2020-01-01 00:00:00' -GROUP BY - name -ORDER BY - sum DESC -LIMIT - 10; -~~~ - -To decode plan gists, use the `crdb_internal.decode_plan_gist` function, as shown in the following query. The example shows the performance impact of adding an [index on the `start_time` column in the `rides` table](apply-statement-performance-rules.html#rule-2-use-the-right-index). The first row of the output shows the improved performance (reduced number of rows read and latency) after the index was added. The second row shows the query, which performs a full scan on the `rides` table, before the index was added. 
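-
-For reference, the index that produced the improved plan in this example can be created with a statement along the following lines (a sketch based on the `rides@rides_start_time_idx` plan shown in the output below; adjust the table and column names to your own schema):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-CREATE INDEX ON rides (start_time) STORING (rider_id);
-~~~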
- -{% include_cached copy-clipboard.html %} -~~~ sql -SELECT -substring(metadata ->> 'query',1,60) AS statement_text, - string_agg( crdb_internal.decode_plan_gist(statistics->'statistics'->'planGists'->>0), ' - ') AS plan, - max(aggregated_ts) as timestamp_interval, - max(statistics -> 'statistics' -> 'rowsRead' -> 'mean') AS num_rows_read_mean, - max(statistics -> 'statistics' -> 'runLat' -> 'mean') AS runtime_latency_mean, - statistics->'statistics'->'planGists'->>0 as plan_id -FROM movr.crdb_internal.statement_statistics -WHERE substring(metadata ->> 'query',1,35)='SELECT name, count(rides.id) AS sum' -group by metadata ->> 'query', statistics->'statistics'->'planGists'->>0; -~~~ - -~~~ - statement_text | plan | timestamp_interval | num_rows_read_mean | runtime_latency_mean | plan_id ----------------------------------------------------------------+-----------------------------------------------------+------------------------+--------------------+----------------------+--------------------------------------------------- - SELECT name, count(rides.id) AS sum FROM users JOIN rides ON | • top-k | 2022-04-12 22:00:00+00 | 24786 | 0.028525 | AgHYAQgAiAECAAAB1AEEAAUAAAAJAAICAAAFAgsCGAYE - | │ order | | | | - | │ | | | | - | └── • group (hash) | | | | - | │ group by: rider_id | | | | - | │ | | | | - | └── • hash join | | | | - | │ equality: (rider_id) = (id) | | | | - | │ | | | | - | ├── • scan | | | | - | │ table: rides@rides_start_time_idx | | | | - | │ spans: 1 span | | | | - | │ | | | | - | └── • scan | | | | - | table: users@users_city_id_name_key | | | | - | spans: FULL SCAN | | | | - SELECT name, count(rides.id) AS sum FROM users JOIN rides ON | • top-k | 2022-04-12 22:00:00+00 | 1.375E+5 | 0.279083 | AgHYAQIAiAEAAAADAdQBBAAFAAAACQACAgAABQILAhgGBA== - | │ order | | | | - | │ | | | | - | └── • group (hash) | | | | - | │ group by: rider_id | | | | - | │ | | | | - | └── • hash join | | | | - | │ equality: (rider_id) = (id) | | | | - | │ | | | | - | ├── • filter | | | | - | │ │ | | | | - | │ └── • scan | | | | - | │ table: rides@rides_pkey | | | | - | │ spans: FULL SCAN | | | | - | │ | | | | - | └── • scan | | | | - | table: users@users_city_id_name_key | | | | - | spans: FULL SCAN | | | | -(2 rows) -~~~ - -### `transaction_contention_events` - -{% include_cached new-in.html version="v22.1" %} Contains one row for each transaction [contention](performance-best-practices-overview.html#transaction-contention) event. - -Requires either the `VIEWACTIVITY` or `VIEWACTIVITYREDACTED` [role option](alter-role.html#role-options) to access. If you have the `VIEWACTIVITYREDACTED` role, `contending_key` will be redacted. - -Contention events are stored in memory. You can control the amount of contention events stored per node via the `sql.contention.event_store.capacity` [cluster setting](cluster-settings.html). - -The `sql.contention.event_store.duration_threshold` [cluster setting](cluster-settings.html) specifies the minimum contention duration to cause the contention events to be collected into the `crdb_internal.transaction_contention_events` table. The default value is `0`. If contention event collection is overwhelming the CPU or memory you can raise this value to reduce the load. - -Column | Type | Description -------------|-----|------------ -`collection_ts` | `TIMESTAMPTZ NOT NULL` | The timestamp when the transaction [contention](performance-best-practices-overview.html#transaction-contention) event was collected. -`blocking_txn_id` | `UUID NOT NULL` | The ID of the blocking transaction. 
You can join this column into the [`cluster_contention_events`](#cluster_contention_events) table. -`blocking_txn_fingerprint_id` | `BYTES NOT NULL`| The ID of the blocking transaction fingerprint. To surface historical information about the transactions that caused the [contention](performance-best-practices-overview.html#transaction-contention), you can join this column into the [`statement_statistics`](#statement_statistics) and [`transaction_statistics`](#transaction_statistics) tables to surface historical information about the transactions that caused the contention. -`waiting_txn_id` | `UUID NOT NULL` | The ID of the waiting transaction. You can join this column into the [`cluster_contention_events`](#cluster_contention_events) table. -`waiting_txn_fingerprint_id` | `BYTES NOT NULL` | The ID of the waiting transaction fingerprint. To surface historical information about the transactions that caused the [contention](performance-best-practices-overview.html#transaction-contention), you can join this column into the [`statement_statistics`](#statement_statistics) and [`transaction_statistics`](#transaction_statistics) tables. -`contention_duration` | `INTERVAL NOT NULL` | The interval of time the waiting transaction spent waiting for the blocking transaction. -`contending_key` | `BYTES NOT NULL` | The key on which the transactions contended. - -#### Example - -The following example shows how to join the `transaction_contention_events` table with `transaction_statistics` and `statement_statistics` tables to extract blocking and waiting transaction information. - -1. Display contention table removing in-progress transactions. - - {% include_cached copy-clipboard.html %} - ~~~ sql - SELECT - collection_ts, - blocking_txn_id, - encode(blocking_txn_fingerprint_id, 'hex') as blocking_txn_fingerprint_id, - waiting_txn_id, - encode(waiting_txn_fingerprint_id, 'hex') as waiting_txn_fingerprint_id - FROM - crdb_internal.transaction_contention_events - WHERE - encode(blocking_txn_fingerprint_id, 'hex') != '0000000000000000' AND encode(waiting_txn_fingerprint_id, 'hex') != '0000000000000000'; - ~~~ - - ~~~ - collection_ts | blocking_txn_id | blocking_txn_fingerprint_id | waiting_txn_id | waiting_txn_fingerprint_id - --------------------------------+--------------------------------------+-----------------------------+--------------------------------------+----------------------------- - 2022-04-11 23:41:56.951687+00 | 921e3d5b-22ab-4a94-a7a4-407e143cfa73 | 79ac4a19cff03b60 | 74ac5efa-a1e4-4c24-a648-58b82a192f9d | b7a98a63d6932458 - 2022-04-12 22:55:55.968825+00 | 25c75267-c091-44d4-8c33-8f5247409da5 | f07b4a806f8b7a2e | 5397acb0-69f3-4c5c-b7a3-75d51180df44 | b7a98a63d6932458 - (2 rows) - ~~~ - -1. Display counts for each blocking and waiting transaction fingerprint pair. 
- - {% include_cached copy-clipboard.html %} - ~~~ sql - SELECT - encode(hce.blocking_txn_fingerprint_id, 'hex') as blocking_txn_fingerprint_id, - encode(hce.waiting_txn_fingerprint_id, 'hex') as waiting_txn_fingerprint_id, - count(*) AS contention_count - FROM - crdb_internal.transaction_contention_events hce - WHERE - encode(blocking_txn_fingerprint_id, 'hex') != '0000000000000000' AND encode(waiting_txn_fingerprint_id, 'hex') != '0000000000000000' - GROUP BY - hce.blocking_txn_fingerprint_id, hce.waiting_txn_fingerprint_id - ORDER BY - contention_count - DESC; - ~~~ - - ~~~ - blocking_txn_fingerprint_id | waiting_txn_fingerprint_id | contention_count - ------------------------------+----------------------------+------------------- - 79ac4a19cff03b60 | b7a98a63d6932458 | 1 - f07b4a806f8b7a2e | b7a98a63d6932458 | 1 - (3 rows) - ~~~ - -1. Join to show blocking statements text. - - {% include_cached copy-clipboard.html %} - ~~~ sql - SELECT DISTINCT - hce.blocking_statement, - substring(ss2.metadata ->> 'query', 1, 60) AS waiting_statement, - hce.contention_count - FROM (SELECT - blocking_txn_fingerprint_id, - waiting_txn_fingerprint_id, - contention_count, - substring(ss.metadata ->> 'query', 1, 60) AS blocking_statement - FROM (SELECT - encode(blocking_txn_fingerprint_id, 'hex') as blocking_txn_fingerprint_id, - encode(waiting_txn_fingerprint_id, 'hex') as waiting_txn_fingerprint_id, - count(*) AS contention_count - FROM - crdb_internal.transaction_contention_events - GROUP BY - blocking_txn_fingerprint_id, waiting_txn_fingerprint_id - ), - crdb_internal.statement_statistics ss - WHERE - blocking_txn_fingerprint_id = encode(ss.transaction_fingerprint_id, 'hex')) hce, - crdb_internal.statement_statistics ss2 - WHERE - hce.blocking_txn_fingerprint_id != '0000000000000000' AND - hce.waiting_txn_fingerprint_id != '0000000000000000' AND - hce.waiting_txn_fingerprint_id = encode(ss2.transaction_fingerprint_id, 'hex') - ORDER BY - contention_count - DESC; - ~~~ - - ~~~ - blocking_statement | waiting_statement | contention_count - --------------------------------------------------------+--------------------------------------------------------------+------------------- - CREATE UNIQUE INDEX ON users (city, id, name) | SELECT status, payload, progress, crdb_internal.sql_liveness | 1 - CREATE INDEX ON rides (start_time) STORING (rider_id) | SELECT status, payload, progress, crdb_internal.sql_liveness | 1 - (2 rows) - ~~~ - -### `transaction_statistics` - -Column | Type | Description -------------|-----|------------ -`aggregated_ts` | `TIMESTAMPTZ` | The time that statistics aggregation started. -`fingerprint_id` | `BYTES` | The ID of the transaction fingerprint. -`app_name` | `STRING`| The name of the application that executed the transaction. -`metadata` | `JSONB` | Metadata that describes the transaction. See [`metadata` column](#metadata-column). -`statistics` | `JSONB` | Statistics for the transaction. See [`statistics` column](#statistics-column). -`aggregation_interval` | `INTERVAL` | The interval of time over which statistics are aggregated. - -#### View historical transaction statistics per fingerprint - -This example command shows how to query the two most important JSON columns: `metadata` and `statistics`. 
It displays -the statistics for transactions on the [`movr`](movr.html) demo database: - -{% include_cached copy-clipboard.html %} -~~~sql -SELECT - metadata -> 'stmtFingerprintIDs' AS statement_fingerprint_id, - statistics -> 'execution_statistics' -> 'cnt' AS execution_count, - statistics -> 'execution_statistics' -> 'contentionTime' -> 'mean' AS contention_time_mean, - statistics -> 'execution_statistics' -> 'maxDiskUsage' -> 'mean' AS max_disk_usage_mean, - statistics -> 'execution_statistics' -> 'maxMemUsage' -> 'mean' AS max_mem_usage_mean, - statistics -> 'execution_statistics' -> 'networkBytes' -> 'mean' AS num_ntwk_bytes_mean, - statistics -> 'execution_statistics' -> 'networkMsgs' -> 'mean' AS num_ntwk_msgs_mean, - statistics -> 'statistics' -> 'bytesRead' -> 'mean' AS bytes_read_mean, - statistics -> 'statistics' -> 'cnt' AS count, - statistics -> 'statistics' -> 'commitLat' -> 'mean' AS commit_lat_mean, - statistics -> 'statistics' -> 'maxRetries' AS max_retries, - statistics -> 'statistics' -> 'numRows' -> 'mean' AS num_rows_mean, - statistics -> 'statistics' -> 'retryLat' -> 'mean' AS retry_latency_mean, - statistics -> 'statistics' -> 'rowsRead' -> 'mean' AS num_rows_read_mean, - statistics -> 'statistics' -> 'rowsWritten' -> 'mean' AS num_rows_written_mean, - statistics -> 'statistics' -> 'svcLat' -> 'mean' AS service_lat_mean -FROM crdb_internal.transaction_statistics WHERE app_name = 'movr' LIMIT 20; -~~~ -~~~ - statement_fingerprint_id | execution_count | contention_time_mean | max_disk_usage_mean | max_mem_usage_mean | num_ntwk_bytes_mean | num_ntwk_msgs_mean | bytes_read_mean | count | commit_lat_mean | max_retries | num_rows_mean | retry_latency_mean | num_rows_read_mean | num_rows_written_mean | service_lat_mean ----------------------------+-----------------+----------------------+---------------------+--------------------+---------------------+--------------------+--------------------+-------+--------------------------+-------------+---------------+--------------------+--------------------+-----------------------+------------------------ - ["ae6bf00068ea788b"] | 7 | 0 | 0 | 2.048E+4 | 0 | 0 | 299.35812133072403 | 511 | 0.00000699021526418786 | 0 | 1 | 0 | 1.9315068493150669 | 0 | 0.0020743385518591003 - ["ae6bf00068ea788b"] | 7 | 0 | 0 | 2.048E+4 | 0 | 0 | 300.61684210526295 | 475 | 0.00000655368421052631 | 0 | 1 | 0 | 1.9389473684210534 | 0 | 0.0019613578947368414 - ["bd6cff84f3c76319"] | 6 | 0 | 0 | 2.048E+4 | 0 | 0 | 215.77310924369766 | 714 | 0.000008922969187675072 | 0 | 1 | 0 | 1.9621848739495786 | 0 | 0.00228533193277311 - ["bd6cff84f3c76319"] | 7 | 0 | 0 | 2.048E+4 | 0 | 0 | 214.7635658914728 | 774 | 0.0000071511627906976775 | 0 | 1 | 0 | 1.9547803617571062 | 0 | 0.002103399224806201 - ["cfc8fc0503422c76"] | 3 | 0 | 0 | 1.024E+4 | 0 | 0 | 0 | 368 | 0.0013085163043478267 | 0 | 1 | 0 | 0 | 1 | 0.001331747282608696 - ["cfc8fc0503422c76"] | 4 | 0 | 0 | 1.024E+4 | 0 | 0 | 0 | 361 | 0.0011630997229916886 | 0 | 1 | 0 | 0 | 1 | 0.0019714072022160665 - ["dc9d9b4fcdd7511e"] | 1 | 0 | 0 | 4.096E+4 | 152 | 3 | 0 | 116 | 0.000006956896551724138 | 0 | 1 | 0 | 0 | 0 | 0.0014110603448275855 - ["dc9d9b4fcdd7511e"] | 1 | 0 | 0 | 4.096E+4 | 152 | 3 | 0 | 126 | 0.000006730158730158729 | 0 | 1 | 0 | 0 | 0 | 0.0013825634920634914 - ["22295b56d9b279f5"] | 4 | 0 | 0 | 1.024E+4 | 0 | 0 | 0 | 140 | 0.000007021428571428573 | 0 | 1 | 0 | 0 | 1 | 0.0021642071428571432 - ["22295b56d9b279f5"] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 116 | 0.000008215517241379309 | 0 | 1 | 0 | 0 | 1 | 
0.0021244137931034483 - ["051aca13769620d3"] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 95 | 0.0012012000000000008 | 0 | 1 | 0 | 1 | 1 | 0.002633694736842105 - ["051aca13769620d3"] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 93 | 0.001323118279569892 | 0 | 1 | 0 | 1 | 1 | 0.0024401720430107525 - ["1b8e962ebb4b2c5c"] | 2 | 0 | 0 | 1.024E+4 | 0 | 0 | 0 | 132 | 0.0011926818181818182 | 0 | 1 | 0 | 0 | 1 | 0.0023831893939393945 - ["1b8e962ebb4b2c5c"] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 110 | 0.0013019909090909092 | 0 | 1 | 0 | 0 | 1 | 0.0026221727272727276 - ["15489d7704332101"] | 125 | 0 | 0 | 5.12E+4 | 0 | 0 | 0 | 12199 | 0.0014255228297401419 | 0 | 1 | 0 | 1 | 1 | 0.004390958439216331 - ["15489d7704332101"] | 114 | 0 | 0 | 5.12E+4 | 0 | 0 | 0 | 12380 | 0.0014023091276251975 | 0 | 1 | 0 | 1 | 1 | 0.004440339660743126 - ["1165541b8979eb40"] | 1 | 0 | 0 | 3.072E+4 | 208 | 3 | 482.75193798449624 | 129 | 0.0000073410852713178325 | 0 | 1 | 0 | 1.984496124031008 | 0 | 0.002383465116279069 - ["1165541b8979eb40"] | 1 | 0 | 0 | 2.048E+4 | 0 | 0 | 473.58394160583924 | 137 | 0.000006656934306569342 | 0 | 1 | 0 | 1.9416058394160585 | 0 | 0.001756532846715328 - ["485a374e9e1c11d0"] | 11 | 0 | 0 | 1.024E+4 | 0 | 0 | 0 | 480 | 0.0014361895833333342 | 0 | 1 | 0 | 0 | 1 | 0.0027399666666666684 - ["485a374e9e1c11d0"] | 9 | 0 | 0 | 1.024E+4 | 0 | 0 | 0 | 465 | 0.0011626645161290328 | 0 | 1 | 0 | 0 | 1 | 0.0026638344086021525 -~~~ - -## See also - -- [`SHOW`](show-vars.html) -- [`SHOW COLUMNS`](show-columns.html) -- [`SHOW CONSTRAINTS`](show-constraints.html) -- [`SHOW CREATE`](show-create.html) -- [`SHOW DATABASES`](show-databases.html) -- [`SHOW GRANTS`](show-grants.html) -- [`SHOW INDEX`](show-index.html) -- [`SHOW SCHEMAS`](show-schemas.html) -- [`SHOW TABLES`](show-tables.html) -- [SQL Name Resolution](sql-name-resolution.html) -- [System Catalogs](system-catalogs.html) diff --git a/src/current/v22.1/create-and-configure-changefeeds.md b/src/current/v22.1/create-and-configure-changefeeds.md deleted file mode 100644 index 6523c5d94a6..00000000000 --- a/src/current/v22.1/create-and-configure-changefeeds.md +++ /dev/null @@ -1,142 +0,0 @@ ---- -title: Create and Configure Changefeeds -summary: Create and configure a changefeed job for Core and Enterprise. -toc: true -docs_area: stream_data ---- - -Core and {{ site.data.products.enterprise }} changefeeds offer different levels of configurability. {{ site.data.products.enterprise }} changefeeds allow for active changefeed jobs to be [paused](#pause), [resumed](#resume), and [canceled](#cancel). - -Both Core and {{ site.data.products.enterprise }} changefeeds require that you enable rangefeeds before creating a changefeed. See the [Enable rangefeeds](#enable-rangefeeds) section for further detail. - -## Considerations - -- It is necessary to [enable rangefeeds](#enable-rangefeeds) for changefeeds to work. -- If you require [`resolved`](create-changefeed.html#resolved-option) message frequency under `30s`, then you **must** set the [`min_checkpoint_frequency`](create-changefeed.html#min-checkpoint-frequency) option to at least the desired `resolved` frequency. -- Many DDL queries (including [`TRUNCATE`](truncate.html), [`DROP TABLE`](drop-table.html), and queries that add a column family) will cause errors on a changefeed watching the affected tables. You will need to [start a new changefeed](create-changefeed.html#start-a-new-changefeed-where-another-ended). -- Partial or intermittent sink unavailability may impact changefeed stability. 
If a sink is unavailable, messages can't send, which means that a changefeed's high-water mark timestamp is at risk of falling behind the cluster's [garbage collection window](configure-replication-zones.html#replication-zone-variables). Throughput and latency can be affected once the sink is available again. However, [ordering guarantees](changefeed-messages.html#ordering-guarantees) will still hold for as long as a changefeed [remains active](monitor-and-debug-changefeeds.html#monitor-a-changefeed). -- When an [`IMPORT INTO`](import-into.html) statement is run, any current changefeed jobs targeting that table will fail. -- {% include {{ page.version.version }}/cdc/virtual-computed-column-cdc.md %} - -When creating a changefeed, it's important to consider the number of changefeeds versus the number of tables to include in a single changefeed: - -- Changefeeds each have their own memory overhead, so every running changefeed will increase total memory usage. -- Creating a single changefeed that will watch hundreds of tables can affect the performance of a changefeed by introducing coupling, where the performance of a watched table affects the performance of the changefeed watching it. For example, any [schema change](changefeed-messages.html#schema-changes) on any of the tables will affect the entire changefeed's performance. - -To watch multiple tables, we recommend creating a changefeed with a comma-separated list of tables. However, we do **not** recommend creating a single changefeed for watching hundreds of tables. - -We suggest monitoring the performance of your changefeeds. See [Monitor and Debug Changefeeds](monitor-and-debug-changefeeds.html) for more detail. - -## Enable rangefeeds - -Changefeeds connect to a long-lived request (i.e., a rangefeed), which pushes changes as they happen. This reduces the latency of row changes, as well as reduces transaction restarts on tables being watched by a changefeed for some workloads. - -**Rangefeeds must be enabled for a changefeed to work.** To [enable the cluster setting](set-cluster-setting.html): - -{% include_cached copy-clipboard.html %} -~~~ sql -> SET CLUSTER SETTING kv.rangefeed.enabled = true; -~~~ - -{% include {{ page.version.version }}/cdc/cdc-cloud-rangefeed.md %} - -Any created changefeeds will error until this setting is enabled. Note that enabling rangefeeds currently has a small performance cost (about a 5-10% increase in latencies), whether or not the rangefeed is being used in a changefeed. - -The `kv.closed_timestamp.target_duration` [cluster setting](cluster-settings.html) can be used with changefeeds. Resolved timestamps will always be behind by at least the duration configured by this setting. However, decreasing the duration leads to more transaction restarts in your cluster, which can affect performance. - -The following Enterprise and Core sections outline how to create and configure each type of changefeed: - -
    - -## Configure a changefeed - -An {{ site.data.products.enterprise }} changefeed streams row-level changes in a configurable format to a configurable sink (i.e., Kafka or a cloud storage sink). You can [create](#create), [pause](#pause), [resume](#resume), and [cancel](#cancel) an {{ site.data.products.enterprise }} changefeed. For a step-by-step example connecting to a specific sink, see the [Changefeed Examples](changefeed-examples.html) page. - -### Create - -To create an {{ site.data.products.enterprise }} changefeed: - -{% include_cached copy-clipboard.html %} -~~~ sql -CREATE CHANGEFEED FOR TABLE table_name, table_name2 INTO '{scheme}://{host}:{port}?{query_parameters}'; -~~~ - -{% include {{ page.version.version }}/cdc/url-encoding.md %} - -When you create a changefeed **without** specifying a sink, CockroachDB sends the changefeed events to the SQL client. Consider the following regarding the [display format](cockroach-sql.html#sql-flag-format) in your SQL client: - -- If you do not define a display format, the client will buffer forever waiting for the query to finish because the default format needs to know the maximum row length. -- If you create a changefeed without a sink but specify a display format (e.g., `--format=csv`), it will run as a [core-style changefeed](changefeed-for.html) sending messages to the SQL client. - -For more information, see [`CREATE CHANGEFEED`](create-changefeed.html). - -### Pause - -To pause an {{ site.data.products.enterprise }} changefeed: - -{% include_cached copy-clipboard.html %} -~~~ sql -PAUSE JOB job_id; -~~~ - -For more information, see [`PAUSE JOB`](pause-job.html). - -### Resume - -To resume a paused {{ site.data.products.enterprise }} changefeed: - -{% include_cached copy-clipboard.html %} -~~~ sql -RESUME JOB job_id; -~~~ - -For more information, see [`RESUME JOB`](resume-job.html). - -### Cancel - -To cancel an {{ site.data.products.enterprise }} changefeed: - -{% include_cached copy-clipboard.html %} -~~~ sql -CANCEL JOB job_id; -~~~ - -For more information, see [`CANCEL JOB`](cancel-job.html). - -### Modify a changefeed - -{% include {{ page.version.version }}/cdc/modify-changefeed.md %} - -### Configuring all changefeeds - -{% include {{ page.version.version }}/cdc/configure-all-changefeed.md %} - -
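-
-As an end-to-end sketch, you can look up a changefeed's job ID with [`SHOW CHANGEFEED JOBS`](show-jobs.html) and then pause, resume, or cancel it by that ID (the job ID below is illustrative):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-SHOW CHANGEFEED JOBS;
-PAUSE JOB 685724608744325121;
-~~~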
    - -## Create a changefeed - -A core changefeed streams row-level changes to the client indefinitely until the underlying connection is closed or the changefeed is canceled. - -To create a core changefeed: - -{% include_cached copy-clipboard.html %} -~~~ sql -EXPERIMENTAL CHANGEFEED FOR table_name; -~~~ - -For more information, see [`EXPERIMENTAL CHANGEFEED FOR`](changefeed-for.html). - -
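-
-A core changefeed also accepts options; for example, the following sketch emits [resolved timestamp](changefeed-messages.html#resolved-def) messages roughly every 10 seconds (adjust the interval and table name to your workload):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-EXPERIMENTAL CHANGEFEED FOR table_name WITH resolved = '10s';
-~~~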
    - -## See also - -- [`SHOW JOBS`](show-jobs.html) -- [`EXPERIMENTAL CHANGEFEED FOR`](changefeed-for.html) -- [`CREATE CHANGEFEED`](create-changefeed.html) diff --git a/src/current/v22.1/create-changefeed.md b/src/current/v22.1/create-changefeed.md deleted file mode 100644 index 93e80b2b100..00000000000 --- a/src/current/v22.1/create-changefeed.md +++ /dev/null @@ -1,403 +0,0 @@ ---- -title: CREATE CHANGEFEED -summary: The CREATE CHANGEFEED statement creates a changefeed of row-level change subscriptions in a configurable format to a configurable sink. -toc: true -docs_area: reference.sql ---- - -{{site.data.alerts.callout_info}} -`CREATE CHANGEFEED` is an [{{ site.data.products.enterprise }}-only](enterprise-licensing.html) feature. For the core version, see [`EXPERIMENTAL CHANGEFEED FOR`](changefeed-for.html). -{{site.data.alerts.end}} - -The `CREATE CHANGEFEED` [statement](sql-statements.html) creates a new {{ site.data.products.enterprise }} changefeed, which targets an allowlist of tables called "watched rows". Every change to a watched row is emitted as a record in a configurable format (`JSON` or Avro) to a configurable sink ([Kafka](https://kafka.apache.org/), [Google Cloud Pub/Sub](https://cloud.google.com/pubsub), a [cloud storage sink](changefeed-sinks.html#cloud-storage-sink), or a [webhook sink](changefeed-sinks.html#webhook-sink)). You can [create](#create-a-changefeed-connected-to-kafka), [pause](#pause-a-changefeed), [resume](#resume-a-paused-changefeed), [alter](alter-changefeed.html), or [cancel](#cancel-a-changefeed) an {{ site.data.products.enterprise }} changefeed. - -We recommend reading the [Changefeed Messages](changefeed-messages.html) page for detail on understanding how changefeeds emit messages and [Create and Configure Changefeeds](create-and-configure-changefeeds.html) for important usage considerations. - -## Required privileges - -To create a changefeed, the user must be a member of the `admin` role or have the [`CREATECHANGEFEED`](create-user.html#create-a-user-that-can-control-changefeeds) parameter set. - -## Synopsis - -
    -{% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/create_changefeed.html %} -
-## Parameters
-
-Parameter | Description
-----------|------------
-`table_name` | The name of the table (or tables in a comma-separated list) to create a changefeed for.

    **Note:** Before creating a changefeed, consider the number of changefeeds versus the number of tables to include in a single changefeed. Each scenario can have an impact on total memory usage or changefeed performance. Refer to [Create and Configure Changefeeds](create-and-configure-changefeeds.html) for more detail. -`sink` | The location of the configurable sink. The scheme of the URI indicates the type. For more information, see [Sink URI](#sink-uri).

    **Note:** If you create a changefeed without a sink, your changefeed will run as a [core-style changefeed](changefeed-for.html) sending messages to the SQL client. For more detail, refer to the [Create and Configure Changefeeds](create-and-configure-changefeeds.html#create) page. -`option` / `value` | For a list of available options and their values, refer to [Options](#options). - -### Sink URI - -The sink URI follows the basic format of: - -~~~ -'{scheme}://{host}:{port}?{query_parameters}' -~~~ - -URI Component | Description --------------------+------------------------------------------------------------------ -`scheme` | The type of sink: [`kafka`](#kafka), [`gcpubsub`](#google-cloud-pub-sub), any [cloud storage sink](#cloud-storage), or [webhook sink](#webhook). -`host` | The sink's hostname or IP address. -`port` | The sink's port. -`query_parameters` | The sink's [query parameters](#query-parameters). - -{{site.data.alerts.callout_info}} -See [Changefeed Sinks](changefeed-sinks.html) for considerations when using each sink and detail on configuration. -{{site.data.alerts.end}} - -#### Kafka - -Example of a Kafka sink URI: - -~~~ -'kafka://broker.address.com:9092?topic_prefix=bar_&tls_enabled=true&ca_cert=LS0tLS1CRUdJTiBDRVJUSUZ&sasl_enabled=true&sasl_user={sasl user}&sasl_password={url-encoded password}&sasl_mechanism=SCRAM-SHA-256' -~~~ - -#### Google Cloud Pub/Sub - -{{site.data.alerts.callout_info}} -The Google Cloud Pub/Sub sink is currently in **beta**. -{{site.data.alerts.end}} - -{% include_cached new-in.html version="v22.1" %} Example of a Google Cloud Pub/Sub sink URI: - -~~~ -'gcpubsub://{project name}?region={region}&topic_name={topic name}&AUTH=specified&CREDENTIALS={base64-encoded key}' -~~~ - -[Use Cloud Storage for Bulk Operations](use-cloud-storage-for-bulk-operations.html?filters=gcs#authentication) explains the requirements for the authentication parameter with `specified` or `implicit`. See [Changefeed Sinks](changefeed-sinks.html#google-cloud-pub-sub) for further consideration. - -#### Cloud Storage - -The following are example file URLs for each of the cloud storage schemes: - -Location | Example --------------+---------------------------------------------------------------------------------- -Amazon S3 | `'s3://{BUCKET NAME}/{PATH}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}'` -Azure Blob Storage | `'azure://{CONTAINER NAME}/{PATH}?AZURE_ACCOUNT_NAME={ACCOUNT NAME}&AZURE_ACCOUNT_KEY={URL-ENCODED KEY}'` -Google Cloud | `'gs://{BUCKET NAME}/{PATH}?AUTH=specified&CREDENTIALS={ENCODED KEY'` -HTTP | `'http://localhost:8080/{PATH}'` - -[Use Cloud Storage for Bulk Operations](use-cloud-storage-for-bulk-operations.html) explains the requirements for authentication and encryption for each supported cloud storage sink. See [Changefeed Sinks](changefeed-sinks.html#cloud-storage-sink) for considerations when using cloud storage. - -#### Webhook - -{{site.data.alerts.callout_info}} -The webhook sink is currently in **beta**. -{{site.data.alerts.end}} - -Example of a webhook URI: - -~~~ -'webhook-https://{your-webhook-endpoint}?insecure_tls_skip_verify=true' -~~~ - -See [Changefeed Sinks](changefeed-sinks.html#webhook-sink) for specifics on webhook sink configuration. - -### Query parameters - -{% include {{ page.version.version }}/cdc/url-encoding.md %} - -Query parameters include: - -Parameter |
    Sink Type
    |
    Type
    | Description --------------------+-----------------------------------------------+-------------------------------------+------------------------------------------------------------ -`ca_cert` | [Kafka](changefeed-sinks.html#kafka), [webhook](changefeed-sinks.html#webhook-sink), ([Confluent schema registry](https://docs.confluent.io/platform/current/schema-registry/index.html)) | [`STRING`](string.html) | The base64-encoded `ca_cert` file. Specify `ca_cert` for a Kafka sink, webhook sink, and/or a Confluent schema registry.

    For usage with a Kafka sink, see [Kafka Sink URI](changefeed-sinks.html#kafka).

    It's necessary to state `https` in the schema registry's address when passing `ca_cert`:
    `confluent_schema_registry='https://schema_registry:8081?ca_cert=LS0tLS1CRUdJTiBDRVJUSUZ'`
    See [`confluent_schema_registry`](#confluent-registry) for more detail on using this option.

    Note: To encode your `ca.cert`, run `base64 -w 0 ca.cert`. -`client_cert` | [Kafka](changefeed-sinks.html#kafka), [webhook](changefeed-sinks.html#webhook-sink) | [`STRING`](string.html) | The base64-encoded Privacy Enhanced Mail (PEM) certificate. This is used with `client_key`. -`client_key` | [Kafka](changefeed-sinks.html#kafka), [webhook](changefeed-sinks.html#webhook-sink) | [`STRING`](string.html) | The base64-encoded private key for the PEM certificate. This is used with `client_cert`.

    {% include {{ page.version.version }}/cdc/client-key-encryption.md %} -`file_size` | [cloud](changefeed-sinks.html#cloud-storage-sink) | [`STRING`](string.html) | The file will be flushed (i.e., written to the sink) when it exceeds the specified file size. This can be used with the [`WITH resolved` option](#options), which flushes on a specified cadence.

    **Default:** `16MB` -`insecure_tls_skip_verify` | [Kafka](changefeed-sinks.html#kafka), [webhook](changefeed-sinks.html#webhook-sink) | [`BOOL`](bool.html) | If `true`, disable client-side validation of responses. Note that a CA certificate is still required; this parameter means that the client will not verify the certificate. **Warning:** Use this query parameter with caution, as it creates [MITM](https://en.wikipedia.org/wiki/Man-in-the-middle_attack) vulnerabilities unless combined with another method of authentication.

    **Default:** `false` -`partition_format` | [cloud](changefeed-sinks.html#cloud-storage-sink) | [`STRING`](string.html) | **New in v22.1:** Specify how changefeed [file paths](#general-file-format) are partitioned in cloud storage sinks. Use `partition_format` with the following values:

    • `daily` is the default behavior that organizes directories by dates (`2022-05-18/`, `2022-05-19/`, etc.).
    • `hourly` will further organize directories by hour within each date directory (`2022-05-18/06`, `2022-05-18/07`, etc.).
    • `flat` will not partition the files at all.

    For example: `CREATE CHANGEFEED FOR TABLE users INTO 'gs://...?AUTH...&partition_format=hourly'`

    **Default:** `daily` -`S3_storage_class` | [Amazon S3 cloud storage sink](changefeed-sinks.html#amazon-s3) | [`STRING`](string.html) | Specify the Amazon S3 storage class for files created by the changefeed. See [Create a changefeed with an S3 storage class](#create-a-changefeed-with-an-s3-storage-class) for the available classes and an example.

    **Default:** `STANDARD` -`sasl_enabled` | [Kafka](changefeed-sinks.html#kafka) | [`BOOL`](bool.html) | If `true`, the authentication protocol can be set to SCRAM or PLAIN using the `sasl_mechanism` parameter. You must have `tls_enabled` set to `true` to use SASL.

    **Default:** `false` -`sasl_mechanism` | [Kafka](changefeed-sinks.html#kafka) | [`STRING`](string.html) | Can be set to [`SCRAM-SHA-256`](https://docs.confluent.io/platform/current/kafka/authentication_sasl/authentication_sasl_scram.html), [`SCRAM-SHA-512`](https://docs.confluent.io/platform/current/kafka/authentication_sasl/authentication_sasl_scram.html), or [`PLAIN`](https://docs.confluent.io/current/kafka/authentication_sasl/authentication_sasl_plain.html). A `sasl_user` and `sasl_password` are required.

    **Default:** `PLAIN` -`sasl_password` | [Kafka](changefeed-sinks.html#kafka) | [`STRING`](string.html) | Your SASL password. **Note:** Passwords should be [URL encoded](https://en.wikipedia.org/wiki/Percent-encoding) since the value can contain characters that would cause authentication to fail. -`sasl_user` | [Kafka](changefeed-sinks.html#kafka) | [`STRING`](string.html) | Your SASL username. -`topic_name` | [Kafka](changefeed-sinks.html#kafka), [GC Pub/Sub](changefeed-sinks.html#google-cloud-pub-sub) | [`STRING`](string.html) | Allows arbitrary topic naming for Kafka and GC Pub/Sub topics. See the [Kafka topic naming limitations](changefeed-sinks.html#topic-naming) or [GC Pub/Sub topic naming](changefeed-sinks.html#pub-sub-topic-naming) for detail on supported characters etc.

    For example, `CREATE CHANGEFEED FOR foo,bar INTO 'kafka://sink?topic_name=all'` will emit all records to a topic named `all`. Note that schemas will still be registered separately. When using Kafka, this option can be combined with the [`topic_prefix` option](#topic-prefix-param) (this is not supported for GC Pub/Sub).

    **Default:** table name. -`topic_prefix` | [Kafka](changefeed-sinks.html#kafka), [cloud](changefeed-sinks.html#cloud-storage-sink) | [`STRING`](string.html) | Adds a prefix to all topic names.

    For example, `CREATE CHANGEFEED FOR TABLE foo INTO 'kafka://...?topic_prefix=bar_'` would emit rows under the topic `bar_foo` instead of `foo`. -`tls_enabled` | [Kafka](changefeed-sinks.html#kafka) | [`BOOL`](bool.html) | If `true`, enable Transport Layer Security (TLS) on the connection to Kafka. This can be used with a `ca_cert` (see below).

    **Default:** `false` - -### Options - -Option | Value | Description --------|-------|------------ -`avro_schema_prefix` | Schema prefix name | Provide a namespace for the schema of a table in addition to the default, the table name. This allows multiple databases or clusters to share the same schema registry when the same table name is present in multiple databases.

    Example: `CREATE CHANGEFEED FOR foo WITH format=avro, confluent_schema_registry='registry_url', avro_schema_prefix='super'` will register subjects as `superfoo-key` and `superfoo-value` with the namespace `super`. -`compression` | `gzip` | Compress changefeed data files written to a [cloud storage sink](changefeed-sinks.html#cloud-storage-sink). Currently, only [Gzip](https://www.gnu.org/software/gzip/) is supported for compression. -`confluent_schema_registry` | Schema Registry address | The [Schema Registry](https://docs.confluent.io/current/schema-registry/docs/index.html#sr) address is required to use `avro`.

    {% include {{ page.version.version }}/cdc/schema-registry-timeout.md %}

    {% include {{ page.version.version }}/cdc/confluent-cloud-sr-url.md %} -`cursor` | [Timestamp](as-of-system-time.html#parameters) | Emit any changes after the given timestamp, but does not output the current state of the table first. If `cursor` is not specified, the changefeed starts by doing an initial scan of all the watched rows and emits the current value, then moves to emitting any changes that happen after the scan.

    When starting a changefeed at a specific `cursor`, the `cursor` cannot be before the configured garbage collection window (see [`gc.ttlseconds`](configure-replication-zones.html#replication-zone-variables)) for the table you're trying to follow; otherwise, the changefeed will error. With default garbage collection settings, this means you cannot create a changefeed that starts more than 25 hours in the past.

    `cursor` can be used to [start a new changefeed where a previous changefeed ended.](#start-a-new-changefeed-where-another-ended)

    Example: `CURSOR='1536242855577149065.0000000000'` -`diff` | N/A | Publish a `before` field with each message, which includes the value of the row before the update was applied. -`end_time` | [Timestamp](as-of-system-time.html#parameters) | New in v22.1: Indicate the timestamp up to which the changefeed will emit all events and then complete with a `successful` status. Provide a future timestamp to `end_time` in number of nanoseconds since the [Unix epoch](https://en.wikipedia.org/wiki/Unix_time). For example, `end_time="1655402400000000000"`. You cannot use `end_time` and [`initial_scan = 'only'`](#initial-scan) simultaneously. -`envelope` | `key_only` / `row`* / `wrapped` | `key_only` emits only the key and no value, which is faster if you only want to know when the key changes.

    `row` emits the row without any additional metadata fields in the message. *You can only use `row` with Kafka sinks or sinkless changefeeds. `row` does not support [`avro` format](#format).

    `wrapped` emits the full message including any metadata fields. See [Responses](changefeed-messages.html#responses) for more detail on message format.

    Default: `envelope=wrapped` -`format` | `json` / `avro` / `csv`* | Format of the emitted record. For mappings of CockroachDB types to Avro types, [see the table](changefeed-messages.html#avro-types) and detail on [Avro limitations](changefeed-messages.html#avro-limitations).

    New in v22.1: *`format=csv` works only in combination with [`initial_scan = 'only'`](#initial-scan). You cannot combine `format=csv` with the [`diff`](#diff-opt) or [`resolved`](#resolved-option) options. See [Export data with changefeeds](export-data-with-changefeeds.html) for details using these options to create a changefeed as an alternative to [`EXPORT`](export.html).

    Default: `format=json`. -`full_table_name` | N/A | Use fully qualified table name in topics, subjects, schemas, and record output instead of the default table name. This can prevent unintended behavior when the same table name is present in multiple databases.

    **Note:** This option cannot modify existing table names used as topics, subjects, etc., as part of an [`ALTER CHANGEFEED`](alter-changefeed.html) statement. To modify a topic, subject, etc., to use a fully qualified table name, create a new changefeed with this option.

    Example: `CREATE CHANGEFEED FOR foo... WITH full_table_name` will create the topic name `defaultdb.public.foo` instead of `foo`. -`initial_scan` | `yes`/`no`/`only` | Control whether or not an initial scan will occur at the start time of a changefeed. Only one `initial_scan` option (`yes`, `no`, or `only`) can be used. If none of these are set, an initial scan will occur if there is no [`cursor`](#cursor-option), and will not occur if there is one. This preserves the behavior from previous releases. With `initial_scan = 'only'` set, the changefeed job will end with a successful status (`succeeded`) after the initial scan completes. You cannot specify `yes`, `no`, `only` simultaneously.

    If used in conjunction with `cursor`, an initial scan will be performed at the cursor timestamp. If no `cursor` is specified, the initial scan is performed at `now()`.

    Although the [`initial_scan` / `no_initial_scan`](../v21.2/create-changefeed.html#initial-scan) syntax from previous versions is still supported, you cannot combine the previous and current syntax.

    **Note**: You cannot use the new `initial_scan = "yes"/"no"/"only"` syntax with [`ALTER CHANGEFEED`](alter-changefeed.html) in v22.1. To ensure that you can modify a changefeed with the `initial_scan` options, use the previous syntax of `initial_scan`, `no_initial_scan`, and `initial_scan_only`.

    Default: `initial_scan = 'yes'` -`kafka_sink_config` | [`STRING`](string.html) | Set fields to configure the required level of message acknowledgement from the Kafka server, the version of the server, and batching parameters for Kafka sinks. New in v22.1.12: Set the message file compression type. See [Kafka sink configuration](changefeed-sinks.html#kafka-sink-configuration) for more detail on configuring all the available fields for this option.

    Example: `CREATE CHANGEFEED FOR table INTO 'kafka://localhost:9092' WITH kafka_sink_config='{"Flush": {"MaxMessages": 1, "Frequency": "1s"}, "RequiredAcks": "ONE"}'` -`key_in_value` | N/A | Make the [primary key](primary-key.html) of a deleted row recoverable in sinks where each message has a value but not a key (most have a key and value in each message). `key_in_value` is automatically used for [cloud storage sinks](changefeed-sinks.html#cloud-storage-sink), [webhook sinks](changefeed-sinks.html#webhook-sink), and [GC Pub/Sub sinks](changefeed-sinks.html#google-cloud-pub-sub). -`metrics_label` | [`STRING`](string.html) | This is an **experimental** feature. Define a metrics label to which the metrics for one or multiple changefeeds increment. All changefeeds also have their metrics aggregated.

    The maximum length of a label is 128 bytes. There is a limit of 1024 unique labels.

    `WITH metrics_label=label_name`

    For more detail on usage and considerations, see [Using changefeed metrics labels](monitor-and-debug-changefeeds.html#using-changefeed-metrics-labels). -`min_checkpoint_frequency` | [Duration string](https://pkg.go.dev/time#ParseDuration) | Controls how often nodes flush their progress to the [coordinating changefeed node](change-data-capture-overview.html#how-does-an-enterprise-changefeed-work). Changefeeds will wait for at least the specified duration before a flush to the sink. This can help you control the flush frequency of higher latency sinks to achieve better throughput. If this is set to `0s`, a node will flush as long as the high-water mark has increased for the ranges that particular node is processing. If a changefeed is resumed, then `min_checkpoint_frequency` is the amount of time that changefeed will need to catch up. That is, it could emit duplicate messages during this time.

    **Note:** [`resolved`](#resolved-option) messages will not be emitted more frequently than the configured `min_checkpoint_frequency` (but may be emitted less frequently). Since `min_checkpoint_frequency` defaults to `30s`, you **must** configure `min_checkpoint_frequency` to at least the desired `resolved` message frequency if you require `resolved` messages more frequently than `30s`.

    **Default:** `30s` -`mvcc_timestamp` | N/A | Include the [MVCC](architecture/storage-layer.html#mvcc) timestamp for each emitted row in a changefeed. With the `mvcc_timestamp` option, each emitted row will always contain its MVCC timestamp, even during the changefeed's initial backfill. -`on_error` | `pause` / `fail` | Use `on_error=pause` to pause the changefeed when encountering **non**-retryable errors. `on_error=pause` will pause the changefeed instead of sending it into a terminal failure state. **Note:** Retryable errors will continue to be retried with this option specified.

    Use with [`protect_data_from_gc_on_pause`](#protect-pause) to protect changes from [garbage collection](configure-replication-zones.html#gc-ttlseconds).

    Default: `on_error=fail` -`protect_data_from_gc_on_pause` | N/A | When a [changefeed is paused](pause-job.html), ensure that the data needed to [resume the changefeed](resume-job.html) is not garbage collected. If `protect_data_from_gc_on_pause` is **unset**, pausing the changefeed will release the existing protected timestamp records. It is also important to note that pausing and adding `protect_data_from_gc_on_pause` to a changefeed will not protect data if the [garbage collection](configure-replication-zones.html#gc-ttlseconds) window has already passed.

    Use with [`on_error=pause`](#on-error) to protect changes from garbage collection when encountering non-retryable errors.

    See [Garbage collection and changefeeds](changefeed-messages.html#garbage-collection-and-changefeeds) for more detail on protecting changefeed data.

    **Note:** If you use this option, changefeeds that are left paused for long periods of time can prevent garbage collection. -`resolved` | [Duration string](https://pkg.go.dev/time#ParseDuration) | Emits [resolved timestamp](changefeed-messages.html#resolved-def) events per changefeed in a format dependent on the connected sink. Resolved timestamp events do not emit until all ranges in the changefeed have progressed to a specific point in time.

    Set an optional minimal duration between emitting resolved timestamps. Example: `resolved='10s'`. This option will **only** emit a resolved timestamp event if the timestamp has advanced and at least the optional duration has elapsed. If unspecified, all resolved timestamps are emitted as the high-water mark advances.

    **Note:** If you require `resolved` message frequency under `30s`, then you **must** set the [`min_checkpoint_frequency`](#min-checkpoint-frequency) option to at least the desired `resolved` frequency. This is because `resolved` messages will not be emitted more frequently than `min_checkpoint_frequency`, but may be emitted less frequently. -`schema_change_events` | `default` / `column_changes` | The type of schema change event that triggers the behavior specified by the `schema_change_policy` option:
    • `default`: Include all [`ADD COLUMN`](add-column.html) events for columns that have a non-`NULL` [`DEFAULT` value](default-value.html) or are [computed](computed-columns.html), and all [`DROP COLUMN`](drop-column.html) events.
    • `column_changes`: Include all schema change events that add or remove any column.

    Default: `schema_change_events=default` -`schema_change_policy` | `backfill` / `nobackfill` / `stop` | The behavior to take when an event specified by the `schema_change_events` option occurs:
    • `backfill`: When [schema changes with column backfill](changefeed-messages.html#schema-changes-with-column-backfill) are finished, output all watched rows using the new schema.
    • `nobackfill`: For [schema changes with column backfill](changefeed-messages.html#schema-changes-with-column-backfill), perform no logical backfills. The changefeed will still emit any duplicate records for the table being altered, but will not emit the new schema records.
    • `stop`: For [schema changes with column backfill](changefeed-messages.html#schema-changes-with-column-backfill), wait for all data preceding the schema change to be resolved before exiting with an error indicating the timestamp at which the schema change occurred. An `error: schema change occurred at <timestamp>` will display in the `cockroach.log` file.

    Default: `schema_change_policy=backfill` -`split_column_families` | N/A | Use this option to create a changefeed on a table with multiple [column families](column-families.html). The changefeed will emit messages for each of the table's column families. See [Changefeeds on tables with column families](changefeeds-on-tables-with-column-families.html) for more usage detail. -`topic_in_value` | [`BOOL`](bool.html) | Set to include the topic in each emitted row update. Note this is automatically set for [webhook sinks](changefeed-sinks.html#webhook-sink). -`updated` | N/A | Include updated timestamps with each row.

    If a `cursor` is provided, the "updated" timestamps will match the [MVCC](architecture/storage-layer.html#mvcc) timestamps of the emitted rows, and there is no initial scan. If a `cursor` is not provided, the changefeed will perform an initial scan (as of the time the changefeed was created), and the "updated" timestamp for each change record emitted in the initial scan will be the timestamp of the initial scan. Similarly, when a [backfill is performed for a schema change](changefeed-messages.html#schema-changes-with-column-backfill), the "updated" timestamp is set to the first timestamp for when the new schema is valid. -`virtual_columns` | `STRING` | **New in v22.1:** Changefeeds omit [virtual computed columns](computed-columns.html) from emitted [messages](changefeed-messages.html) by default. To maintain the behavior of previous CockroachDB versions where the changefeed would emit [`NULL`](null-handling.html) values for virtual computed columns, set `virtual_columns = "null"` when you start a changefeed.

    You may also define `virtual_columns = "omitted"`, though this is already the default behavior for v22.1+. If you do not set `"omitted"` on a table with virtual computed columns when you create a changefeed, you will receive a warning that changefeeds will filter out virtual computed values.

    **Default:** `"omitted"` -`webhook_auth_header` | [`STRING`](string.html) | Pass a value (password, token etc.) to the HTTP [Authorization header](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Authorization) with a webhook request for a "Basic" HTTP authentication scheme.

Example: With a username of "user" and password of "pwd", join them with a colon ("user:pwd") and then base64-encode the result, which yields "dXNlcjpwd2Q=". `WITH webhook_auth_header='Basic dXNlcjpwd2Q='`. -`webhook_client_timeout` | [`INTERVAL`](interval.html) | If a response is not recorded from the sink within this timeframe, the changefeed will error and retry the connection. Note this must be a positive value.

    **Default:** `"3s"` -`webhook_sink_config` | [`STRING`](string.html) | Set fields to configure sink batching and retries. The schema is as follows:

    `{ "Flush": { "Messages": ..., "Bytes": ..., "Frequency": ..., }, "Retry": {"Max": ..., "Backoff": ..., } }`.

**Note** that if either `Messages` or `Bytes` is non-zero, then a non-zero value for `Frequency` must be provided.

    See [Webhook sink configuration](changefeed-sinks.html#webhook-sink-configuration) for more details on using this option. - -{{site.data.alerts.callout_info}} - Using the `format=avro`, `envelope=key_only`, and `updated` options together is rejected. `envelope=key_only` prevents any rows with updated fields from being emitted, which makes the `updated` option meaningless. -{{site.data.alerts.end}} - -## Files - -The files emitted to a sink use the following naming conventions: - -- [General file format](#general-file-format) -- [Resolved file format](#resolved-file-format) - -{{site.data.alerts.callout_info}} -The timestamp format is `YYYYMMDDHHMMSSNNNNNNNNNLLLLLLLLLL`. -{{site.data.alerts.end}} - -### General file format - -~~~ -/[date]/[timestamp]-[uniquer]-[topic]-[schema-id] -~~~ - -For example: - -~~~ -/2020-04-02/202004022058072107140000000000000-56087568dba1e6b8-1-72-00000000-test_table-1.ndjson -~~~ - -{% include_cached new-in.html version="v22.1" %} When emitting changefeed messages to a [cloud storage sink](changefeed-sinks.html#cloud-storage-sink), you can specify a partition format for your files using the [`partition_format`](#partition-format) query parameter. This will result in the following file path formats: - -- `daily`: This is the default option and will follow the same pattern as the previous general file format. -- `hourly`: This will partition into an hourly directory as the changefeed emits messages, like the following: - - ~~~ - /2020-04-02/20/202004022058072107140000000000000-56087568dba1e6b8-1-72-00000000-test_table-1.ndjson - ~~~ - -- `flat`: This will result in no file partitioning. The cloud storage path you specify when creating a changefeed will store all of the message files with no additional directories created. - -### Resolved file format - -~~~ -/[date]/[timestamp].RESOLVED -~~~ - -For example: - -~~~ -/2020-04-04/202004042351304139680000000000000.RESOLVED -~~~ - -## Examples - -Before running any of the examples in this section it is necessary to enable the `kv.rangefeed.enabled` cluster setting. If you are working on a CockroachDB {{ site.data.products.serverless }} cluster, this cluster setting is enabled by default. - -The following examples show the syntax for managing changefeeds and starting changefeeds to specific sinks. The [Options](#options) table on this page provides a list of all the available options. For information on sink-specific query parameters and configurations see the [Changefeed Sinks](changefeed-sinks.html) page. - -### Create a changefeed connected to Kafka - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE CHANGEFEED FOR TABLE name, name2, name3 - INTO 'kafka://host:port' - WITH updated, resolved; -~~~ -~~~ -+--------------------+ -| job_id | -+--------------------+ -| 360645287206223873 | -+--------------------+ -(1 row) -~~~ - -For step-by-step guidance on creating a changefeed connected to Kafka, see the [Create a changefeed connected to Kafka](changefeed-examples.html#create-a-changefeed-connected-to-kafka) example. The parameters table on the [Changefeed Sinks](changefeed-sinks.html#kafka-parameters) page provides a list of all kafka-specific query parameters. 
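
The [Options](#options) table notes that [`resolved`](#resolved-option) messages cannot be emitted more frequently than [`min_checkpoint_frequency`](#min-checkpoint-frequency). As a hedged sketch (not one of the official examples on this page; the Kafka address is a placeholder), a changefeed that needs resolved timestamps roughly every 10 seconds would set both options together:

~~~ sql
> CREATE CHANGEFEED FOR TABLE name
  INTO 'kafka://host:port'
  WITH resolved = '10s', min_checkpoint_frequency = '10s';
~~~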
- -### Create a changefeed connected to Kafka using Avro - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE CHANGEFEED FOR TABLE name, name2, name3 - INTO 'kafka://host:port' - WITH format = avro, confluent_schema_registry = ; -~~~ -~~~ -+--------------------+ -| job_id | -+--------------------+ -| 360645287206223873 | -+--------------------+ -(1 row) -~~~ - -For more information on how to create a changefeed that emits an [Avro](https://avro.apache.org/docs/1.8.2/spec.html) record, see [this step-by-step example](changefeed-examples.html#create-a-changefeed-connected-to-kafka-using-avro). The parameters table on the [Changefeed Sinks](changefeed-sinks.html#kafka-parameters) page provides a list of all kafka-specific query parameters. - -### Create a changefeed connected to a cloud storage sink - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE CHANGEFEED FOR TABLE name, name2, name3 - INTO 'scheme://host?parameters' - WITH updated, resolved; -~~~ -~~~ -+--------------------+ -| job_id | -+--------------------+ -| 360645287206223873 | -+--------------------+ -(1 row) -~~~ - -For step-by-step guidance on creating a changefeed connected to a cloud storage sink, see the [Changefeed Examples](changefeed-examples.html#create-a-changefeed-connected-to-a-cloud-storage-sink) page. The parameters table on the [Changefeed Sinks](changefeed-sinks.html#cloud-parameters) page provides a list of the available cloud storage parameters. - -### Create a changefeed with an S3 storage class - -{% include_cached new-in.html version="v22.1" %} To associate the changefeed message files with a [specific storage class](use-cloud-storage-for-bulk-operations.html#amazon-s3-storage-classes) in your Amazon S3 bucket, use the `S3_STORAGE_CLASS` parameter with the class. For example, the following S3 connection URI specifies the `INTELLIGENT_TIERING` storage class: - -{% include_cached copy-clipboard.html %} -~~~ sql -CREATE CHANGEFEED FOR TABLE name INTO 's3://{BUCKET NAME}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}&S3_STORAGE_CLASS=INTELLIGENT_TIERING' WITH resolved; -~~~ - -{% include {{ page.version.version }}/misc/storage-classes.md %} - -### Create a changefeed connected to a Google Cloud Pub/Sub - -{{site.data.alerts.callout_info}} -The Google Cloud Pub/Sub sink is currently in **beta**. -{{site.data.alerts.end}} - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE CHANGEFEED FOR TABLE name, name2, name3 - INTO 'gcpubsub://project name?parameters' - WITH resolved; -~~~ -~~~ -+--------------------+ -| job_id | -+--------------------+ -| 360645287206223873 | -+--------------------+ -(1 row) -~~~ - -For step-by-step guidance on creating a changefeed connected to a Google Cloud Pub/Sub, see the [Changefeed Examples](changefeed-examples.html#create-a-changefeed-connected-to-a-google-cloud-pub-sub-sink) page. The parameters table on the [Changefeed Sinks](changefeed-sinks.html#pub-sub-parameters) page provides a list of the available Google Cloud Pub/Sub parameters. 
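
Similarly, as a hedged sketch (not an official example on this page; the storage URI is a placeholder), the `initial_scan = 'only'` option described in the [Options](#options) table can be combined with a cloud storage sink to export a table's current contents once, after which the changefeed job ends with a `succeeded` status:

~~~ sql
> CREATE CHANGEFEED FOR TABLE name
  INTO 'scheme://host?parameters'
  WITH initial_scan = 'only';
~~~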
- -### Create a changefeed connected to a webhook sink - -{% include {{ page.version.version }}/cdc/webhook-beta.md %} - -{% include_cached copy-clipboard.html %} -~~~sql -CREATE CHANGEFEED FOR TABLE name, name2, name3 - INTO 'webhook-https://{your-webhook-endpoint}?insecure_tls_skip_verify=true' - WITH updated; -~~~ - -~~~ -+---------------------+ -| job_id | -----------------------+ -| 687842491801632769 | -+---------------------+ -(1 row) -~~~ - -For step-by-step guidance on creating a changefeed connected to a webhook sink, see the [Changefeed Examples](changefeed-examples.html#create-a-changefeed-connected-to-a-webhook-sink) page. The parameters table on the [Changefeed Sinks](changefeed-sinks.html#webhook-parameters) page provides a list of the available webhook parameters. - -### Manage a changefeed - - For {{ site.data.products.enterprise }} changefeeds, use [`SHOW CHANGEFEED JOBS`](show-jobs.html) to check the status of your changefeed jobs: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW CHANGEFEED JOBS; -~~~ - -Use the following SQL statements to pause, resume, or cancel a changefeed. - -#### Pause a changefeed - -{% include_cached copy-clipboard.html %} -~~~ sql -> PAUSE JOB job_id; -~~~ - -For more information, see [`PAUSE JOB`](pause-job.html). - -#### Resume a paused changefeed - -{% include_cached copy-clipboard.html %} -~~~ sql -> RESUME JOB job_id; -~~~ - -For more information, see [`RESUME JOB`](resume-job.html). - -#### Cancel a changefeed - -{% include_cached copy-clipboard.html %} -~~~ sql -> CANCEL JOB job_id; -~~~ - -For more information, see [`CANCEL JOB`](cancel-job.html). - -#### Modify a changefeed - -{% include {{ page.version.version }}/cdc/modify-changefeed.md %} - -#### Configuring all changefeeds - -{% include {{ page.version.version }}/cdc/configure-all-changefeed.md %} - -### Start a new changefeed where another ended - -Find the [high-water timestamp](monitor-and-debug-changefeeds.html#monitor-a-changefeed) for the ended changefeed: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM crdb_internal.jobs WHERE job_id = ; -~~~ -~~~ - job_id | job_type | ... | high_water_timestamp | error | coordinator_id -+--------------------+------------+ ... +--------------------------------+-------+----------------+ - 383870400694353921 | CHANGEFEED | ... | 1537279405671006870.0000000000 | | 1 -(1 row) -~~~ - -Use the `high_water_timestamp` to start the new changefeed: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE CHANGEFEED FOR TABLE name, name2, name3 - INTO 'kafka//host:port' - WITH cursor = ''; -~~~ - -Note that because the cursor is provided, the initial scan is not performed. - -## See also - -- [Change Data Capture Overview](change-data-capture-overview.html) -- [SQL Statements](sql-statements.html) -- [Changefeed Dashboard](ui-cdc-dashboard.html) -- [Monitor and Debug Changefeeds](monitor-and-debug-changefeeds.html) diff --git a/src/current/v22.1/create-database.md b/src/current/v22.1/create-database.md deleted file mode 100644 index 2861da37385..00000000000 --- a/src/current/v22.1/create-database.md +++ /dev/null @@ -1,176 +0,0 @@ ---- -title: CREATE DATABASE -summary: The CREATE DATABASE statement creates a new CockroachDB database. -toc: true -docs_area: reference.sql ---- - -The `CREATE DATABASE` [statement](sql-statements.html) creates a new CockroachDB database. 
- -{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %} - -## Required privileges - -To create a database, the user must be a member of the `admin` role or must have the [`CREATEDB`](create-role.html#create-a-role-that-can-create-and-rename-databases) parameter set. - -## Synopsis - -
    -{% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/create_database.html %} -
    - -## Parameters - -Parameter | Description -----------|------------ -`IF NOT EXISTS` | Create a new database only if a database of the same name does not already exist; if one does exist, do not return an error. -`name` | The name of the database to create, which [must be unique](#create-fails-name-already-in-use) and follow these [identifier rules](keywords-and-identifiers.html#identifiers). -`encoding` | The `CREATE DATABASE` statement accepts an optional `ENCODING` clause for compatibility with PostgreSQL, but `UTF-8` is the only supported encoding. The aliases `UTF8` and `UNICODE` are also accepted. Values should be enclosed in single quotes and are case-insensitive.

    Example: `CREATE DATABASE bank ENCODING = 'UTF-8'`. -`CONNECTION LIMIT` | Supported for compatibility with PostgreSQL. A value of `-1` indicates no connection limit. Values other than `-1` are currently not supported. By default, `CONNECTION LIMIT = -1`. ([*](#connlimit-note)) -`PRIMARY REGION region_name` | Create a [multi-region database](multiregion-overview.html) with `region_name` as [the primary region](multiregion-overview.html#database-regions).
    Allowed values include any region returned by [`SHOW REGIONS FROM CLUSTER`](show-regions.html). -`REGIONS region_name_list` | Create a [multi-region database](multiregion-overview.html) with `region_name_list` as [database regions](multiregion-overview.html#database-regions).
    Allowed values include any region returned by [`SHOW REGIONS FROM CLUSTER`](show-regions.html).
    To set database regions at database creation, a primary region must be specified in the same `CREATE DATABASE` statement. -`SURVIVE ZONE FAILURE` (*Default*)
    `SURVIVE REGION FAILURE` | Create a [multi-region database](multiregion-overview.html) with regional failure or zone failure [survival goals](multiregion-overview.html#survival-goals).
    To set the regional failure survival goal, the database must have at least 3 [database regions](multiregion-overview.html#database-regions).
    Surviving zone failures is the default setting for multi-region databases. - -* -{% include {{page.version.version}}/sql/server-side-connection-limit.md %} This setting may be useful until the `CONNECTION LIMIT` syntax is fully supported. - -## Example - -### Create a database - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE DATABASE bank; -~~~ - -~~~ -CREATE DATABASE -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW DATABASES; -~~~ - -~~~ - database_name | owner | primary_region | regions | survival_goal -----------------+-------+----------------+---------+---------------- - bank | demo | NULL | {} | NULL - defaultdb | root | NULL | {} | NULL - postgres | root | NULL | {} | NULL - system | node | NULL | {} | NULL -(4 rows) -~~~ - -### Create fails (name already in use) - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE DATABASE bank; -~~~ - -~~~ -ERROR: database "bank" already exists -SQLSTATE: 42P04 -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE DATABASE IF NOT EXISTS bank; -~~~ - -~~~ -CREATE DATABASE -~~~ - -SQL does not generate an error, but instead responds `CREATE DATABASE` even though a new database wasn't created. - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW DATABASES; -~~~ - -~~~ - database_name | owner | primary_region | regions | survival_goal -----------------+-------+----------------+---------+---------------- - bank | demo | NULL | {} | NULL - defaultdb | root | NULL | {} | NULL - postgres | root | NULL | {} | NULL - system | node | NULL | {} | NULL -(4 rows) -~~~ - -### Create a multi-region database - -{% include enterprise-feature.md %} - -Suppose you start a cluster with region and zone [localities specified at startup](cockroach-start.html#locality). - -For this example, let's use a [demo cluster](cockroach-demo.html), with the [`--demo-locality` flag](cockroach-demo.html#general) to simulate a multi-region cluster: - -{% include_cached copy-clipboard.html %} -~~~ shell -cockroach demo --nodes=6 --demo-locality=region=us-east1,zone=us-east1-a:region=us-east1,zone=us-east1-b:region=us-central1,zone=us-central1-a:region=us-central1,zone=us-central1-b:region=us-west1,zone=us-west1-a:region=us-west1,zone=us-west1-b --no-example-database -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW REGIONS; -~~~ - -~~~ - region | zones | database_names | primary_region_of ---------------+-------------------------------+----------------+-------------------- - us-central1 | {us-central1-a,us-central1-b} | {} | {} - us-east1 | {us-east1-a,us-east1-b} | {} | {} - us-west1 | {us-west1-a,us-west1-b} | {} | {} -(3 rows) -~~~ - -If regions are set at cluster start-up, you can create multi-region databases in the cluster that use the cluster regions. 
- -Use the following command to specify regions and survival goals at database creation: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE DATABASE bank PRIMARY REGION "us-east1" REGIONS "us-east1", "us-central1", "us-west1" SURVIVE REGION FAILURE; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW DATABASES; -~~~ - -~~~ - database_name | owner | primary_region | regions | survival_goal -----------------+-------+----------------+---------------------------------+---------------- - bank | demo | us-east1 | {us-central1,us-east1,us-west1} | region - defaultdb | root | NULL | {} | NULL - postgres | root | NULL | {} | NULL - system | node | NULL | {} | NULL -(4 rows) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW REGIONS FROM DATABASE bank; -~~~ - -~~~ - database | region | primary | zones ------------+-------------+---------+-------------------------------- - bank | us-east1 | true | {us-east1-a,us-east1-b} - bank | us-central1 | false | {us-central1-a,us-central1-b} - bank | us-west1 | false | {us-west1-a,us-west1-b} -(3 rows) -~~~ - -## See also - -- [`SHOW DATABASES`](show-databases.html) -- [`SHOW CREATE DATABASE`](show-create.html) -- [`RENAME DATABASE`](rename-database.html) -- [`SET DATABASE`](set-vars.html) -- [`DROP DATABASE`](drop-database.html) -- [SQL Statements](sql-statements.html) -- [Online Schema Changes](online-schema-changes.html) diff --git a/src/current/v22.1/create-index.md b/src/current/v22.1/create-index.md deleted file mode 100644 index b73f41a6433..00000000000 --- a/src/current/v22.1/create-index.md +++ /dev/null @@ -1,239 +0,0 @@ ---- -title: CREATE INDEX -summary: The CREATE INDEX statement creates an index for a table. Indexes improve your database's performance by helping SQL quickly locate data. -toc: true -keywords: gin, gin index, gin indexes, inverted index, inverted indexes, accelerated index, accelerated indexes -docs_area: reference.sql ---- - -The `CREATE INDEX` [statement](sql-statements.html) creates an index for a table. [Indexes](indexes.html) improve your database's performance by helping SQL locate data without having to look through every row of a table. - -Indexes are automatically created for a table's [`PRIMARY KEY`](primary-key.html) and [`UNIQUE`](unique.html) columns. When querying a table, CockroachDB uses the fastest index. For more information about that process, see [Index Selection in CockroachDB](https://www.cockroachlabs.com/blog/index-selection-cockroachdb-2/). - -The following types cannot be included in an index key, but can be stored (and used in a covered query) using the [`STORING` or `COVERING`](create-index.html#store-columns) clause: - -- [`JSONB`](jsonb.html) -- [`ARRAY`](array.html) -- The computed [`TUPLE`](scalar-expressions.html#tuple-constructors) type, even if it is constructed from indexed fields - -To create an index on the schemaless data in a [`JSONB`](jsonb.html) column or on the data in an [`ARRAY`](array.html), use a [GIN index](inverted-indexes.html). - -{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %} - -## Required privileges - -The user must have the `CREATE` [privilege](security-reference/authorization.html#managing-privileges) on the table. - -## Synopsis - -### Standard index - -
    -{% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/create_index.html %} -
    - -### GIN index - -
    -{% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/create_inverted_index.html %} -
    - -## Parameters - -Parameter | Description -----------|------------ -`UNIQUE` | Apply the [`UNIQUE` constraint](unique.html) to the indexed columns.

    This causes the system to check for existing duplicate values on index creation. It also applies the `UNIQUE` constraint at the table level, so the system checks for duplicate values when inserting or updating data. -`INVERTED` | Create a [GIN index](inverted-indexes.html) on the schemaless data in the specified [`JSONB`](jsonb.html) column.

    You can also use the PostgreSQL-compatible syntax `USING GIN`. For more details, see [GIN Indexes](inverted-indexes.html#creation). -`IF NOT EXISTS` | Create a new index only if an index of the same name does not already exist; if one does exist, do not return an error. -`opt_index_name`
    `index_name` | The name of the index to create, which must be unique to its table and follow these [identifier rules](keywords-and-identifiers.html#identifiers).

If you do not specify a name, CockroachDB uses the format `<table>_<columns>_key/idx`. `key` indicates the index applies the `UNIQUE` constraint; `idx` indicates it does not. Example: `accounts_balance_idx`

    __Default:__ `ASC` -`STORING ...`| Store (but do not sort) each column whose name you include.

    For information on when to use `STORING`, see [Store Columns](#store-columns). Note that columns that are part of a table's [`PRIMARY KEY`](primary-key.html) cannot be specified as `STORING` columns in secondary indexes on the table.

    `COVERING` and `INCLUDE` are aliases for `STORING` and work identically. -`opt_partition_by` | An [Enterprise-only](enterprise-licensing.html) option that lets you [define index partitions at the row level](partitioning.html). As of CockroachDB v21.1 and later, most users should use [`REGIONAL BY ROW` tables](multiregion-overview.html#regional-by-row-tables). Indexes against regional by row tables are automatically partitioned, so explicit index partitioning is not required. -`opt_where_clause` | An optional `WHERE` clause that defines the predicate boolean expression of a [partial index](partial-indexes.html). -`USING HASH` | Creates a [hash-sharded index](hash-sharded-indexes.html). -`WITH storage_parameter` | A comma-separated list of [spatial index tuning parameters](spatial-indexes.html#index-tuning-parameters). Supported parameters include `fillfactor`, `s2_max_level`, `s2_level_mod`, `s2_max_cells`, `geometry_min_x`, `geometry_max_x`, `geometry_min_y`, and `geometry_max_y`. The `fillfactor` parameter is a no-op, allowed for PostgreSQL-compatibility.

    For details, see [Spatial index tuning parameters](spatial-indexes.html#index-tuning-parameters). For an example, see [Create a spatial index that uses all of the tuning parameters](spatial-indexes.html#create-a-spatial-index-that-uses-all-of-the-tuning-parameters). -`CONCURRENTLY` | Optional, no-op syntax for PostgreSQL compatibility. All indexes are created concurrently in CockroachDB. - -## Viewing schema changes - -{% include {{ page.version.version }}/misc/schema-change-view-job.md %} - -## Examples - -{% include {{page.version.version}}/sql/movr-statements.md %} - -### Create standard indexes - -To create the most efficient indexes, we recommend reviewing: - -- [Indexes: Best Practices](indexes.html#best-practices) -- [Index Selection in CockroachDB](https://www.cockroachlabs.com/blog/index-selection-cockroachdb-2/) - -#### Single-column indexes - -Single-column indexes sort the values of a single column. - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE INDEX ON users (name); -~~~ - -Because each query can only use one index, single-column indexes are not typically as useful as multiple-column indexes. - -#### Multiple-column indexes - -Multiple-column indexes sort columns in the order you list them. - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE INDEX ON users (name, city); -~~~ - -To create the most useful multiple-column indexes, we recommend reviewing our [best practices](schema-design-indexes.html#best-practices). - -#### Unique indexes - -Unique indexes do not allow duplicate values among their columns. - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE UNIQUE INDEX ON users (name, id); -~~~ - -This also applies the [`UNIQUE` constraint](unique.html) at the table level, similar to [`ALTER TABLE`](alter-table.html). The preceding example is equivalent to: - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE users ADD CONSTRAINT users_name_id_key UNIQUE (name, id); -~~~ - -Primary key columns that are not specified within a unique index are automatically marked as [`STORING`](indexes.html#storing-columns) in the [`information_schema.statistics`](information-schema.html#statistics) table and in [`SHOW INDEX`](show-index.html). - -### Create GIN indexes - -You can create [GIN indexes](inverted-indexes.html) on schemaless data in a [`JSONB`](jsonb.html) column. - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE INVERTED INDEX ON promo_codes (rules); -~~~ - -The preceding example is equivalent to the following PostgreSQL-compatible syntax: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE INDEX ON promo_codes USING GIN (rules); -~~~ - -### Create spatial indexes - -You can create [spatial indexes](spatial-indexes.html) on `GEOMETRY` and `GEOGRAPHY` columns. Spatial indexes are a special type of [GIN index](inverted-indexes.html). - -To create a spatial index on a `GEOMETRY` column: - -{% include_cached copy-clipboard.html %} -~~~ sql -CREATE INDEX geom_idx_1 ON some_spatial_table USING GIST(geom); -~~~ - -Unlike GIN indexes, spatial indexes do not support an alternate `CREATE INVERTED INDEX ...` syntax. Only the syntax shown here is supported. 
- -For advanced users, there are a number of [spatial index tuning parameters](spatial-indexes.html#create-a-spatial-index-that-uses-all-of-the-tuning-parameters) that can be passed in using the syntax `WITH (var1=val1, var2=val2)` as follows: - -{% include_cached copy-clipboard.html %} -~~~ sql -CREATE INDEX geom_idx_2 - ON some_spatial_table USING GIST(geom) - WITH (s2_max_cells = 20, s2_max_level = 12, s2_level_mod = 3); -~~~ - -{{site.data.alerts.callout_danger}} -Most users should not change the default spatial index settings. There is a risk that you will get worse performance by changing the default settings. For more information , see [Spatial indexes](spatial-indexes.html). -{{site.data.alerts.end}} - -### Store columns - -Storing a column improves the performance of queries that retrieve (but do not filter) its values. - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE INDEX ON users (city) STORING (name); -~~~ - -However, to use stored columns, queries must filter another column in the same index. For example, SQL can retrieve `name` values from the above index only when a query's `WHERE` clause filters `city`. - -{{site.data.alerts.callout_info}} -{% include {{page.version.version}}/sql/covering-index.md %} -{{site.data.alerts.end}} - -### Change column sort order - -To sort columns in descending order, you must explicitly set the option when creating the index. (Ascending order is the default.) - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE INDEX ON users (city DESC, name); -~~~ - -How a column is ordered in the index will affect the ordering of the index keys, and may affect the efficiency of queries that include an `ORDER BY` clause. - -### Query specific indexes - -Normally, CockroachDB selects the index that it calculates will scan the fewest rows. However, you can override that selection and specify the name of the index you want to use. To find the name, use [`SHOW INDEX`](show-index.html). - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW INDEX FROM users; -~~~ - -~~~ - table_name | index_name | non_unique | seq_in_index | column_name | direction | storing | implicit -+------------+---------------------+------------+--------------+-------------+-----------+---------+----------+ - users | users_pkey | false | 1 | city | ASC | false | false - users | users_pkey | false | 2 | id | ASC | false | false - users | users_pkey | false | 3 | name | N/A | true | false - users | users_pkey | false | 4 | address | N/A | true | false - users | users_pkey | false | 5 | credit_card | N/A | true | false - users | users_city_name_idx | true | 1 | city | DESC | false | false - users | users_city_name_idx | true | 2 | name | ASC | false | false - users | users_city_name_idx | true | 3 | id | ASC | false | true -(8 rows) - -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT name FROM users@users_name_idx WHERE city='new york'; -~~~ - -~~~ - name -+------------------+ - Catherine Nelson - Devin Jordan - James Hamilton - Judy White - Robert Murphy -(5 rows) -~~~ - -You can use the `@primary` alias to use the table's primary key in your query if no secondary index explicitly named `primary` exists on that table. 
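
As a minimal illustration (assuming the `movr` `users` table from the preceding examples and that no secondary index named `primary` exists on it), the alias is used the same way as an explicit index name:

~~~ sql
> SELECT name FROM users@primary WHERE city = 'new york';
~~~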
- -### Create a hash-sharded secondary index - -{% include {{page.version.version}}/performance/use-hash-sharded-indexes.md %} - -{% include {{page.version.version}}/performance/create-index-hash-sharded-secondary-index.md %} - -## See also - -- [Indexes](indexes.html) -- [`SHOW INDEX`](show-index.html) -- [`DROP INDEX`](drop-index.html) -- [`RENAME INDEX`](rename-index.html) -- [`SHOW JOBS`](show-jobs.html) -- [SQL Statements](sql-statements.html) -- [Online Schema Changes](online-schema-changes.html) diff --git a/src/current/v22.1/create-node-certs.res b/src/current/v22.1/create-node-certs.res deleted file mode 100644 index 8487b36ec93..00000000000 --- a/src/current/v22.1/create-node-certs.res +++ /dev/null @@ -1 +0,0 @@ -create-node-certs.res \ No newline at end of file diff --git a/src/current/v22.1/create-role.md b/src/current/v22.1/create-role.md deleted file mode 100644 index 8604d0a4ea2..00000000000 --- a/src/current/v22.1/create-role.md +++ /dev/null @@ -1,343 +0,0 @@ ---- -title: CREATE ROLE -summary: The CREATE ROLE statement creates SQL roles, which are groups containing any number of roles and users as members. -toc: true -docs_area: reference.sql ---- - -The `CREATE ROLE` [statement](sql-statements.html) creates SQL [roles](security-reference/authorization.html#users-and-roles), which are groups containing any number of roles and users as members. You can assign [privileges](security-reference/authorization.html#privileges) to roles, and all members of the role (regardless of whether if they are direct or indirect members) will inherit the role's privileges. - -{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %} - -{{site.data.alerts.callout_info}} -The keywords `ROLE` and `USER` can be used interchangeably in SQL statements for enhanced PostgreSQL compatibility. - -`CREATE USER` is equivalent to the statement `CREATE ROLE`, with one exception: `CREATE ROLE` sets the [`NOLOGIN`](#parameters) option by default, preventing the new role from being used to log in to the database. You can use `CREATE ROLE` and specify the [`LOGIN`](#parameters) option to achieve the same result as `CREATE USER`. -{{site.data.alerts.end}} - -## Considerations - -- Role names: - - Are case-insensitive - - Must start with either a letter or underscore - - Must contain only letters, numbers, periods, or underscores - - Must be between 1 and 63 characters. - - Cannot be `none`. - - Cannot start with `pg_` or `crdb_internal`. Object names with these prefixes are reserved for [system catalogs](system-catalogs.html). -- After creating roles, you must [grant them privileges to databases and tables](grant.html). -- Roles and users can be members of roles. -- Roles and users share the same namespace and must be unique. -- All [privileges](security-reference/authorization.html#privileges) of a role are inherited by all of its members. -- Role options of a role are not inherited by any of its members. -- There is no limit to the number of members in a role. -- Membership loops are not allowed (direct: `A is a member of B is a member of A` or indirect: `A is a member of B is a member of C ... is a member of A`). - -## Required privileges - -Unless a role is a member of the `admin` role, additional [privileges](#parameters) are required to manage other roles. - -- To create other roles, a role must have the [`CREATEROLE`](#create-a-role-that-can-create-other-roles-and-manage-authentication-methods-for-the-new-roles) role option. 
-- To add the `LOGIN` capability for other roles so that they may log in as users, a role must also have the [`CREATELOGIN`](#create-a-role-that-can-create-other-roles-and-manage-authentication-methods-for-the-new-roles) role option. -- To be able to grant or revoke membership to a role for additional roles, a member of the role must be set as a [role admin](security-reference/authorization.html#role-admin) for that role. - -## Synopsis - -
    -{% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/create_role.html %} -
    - -## Parameters - - - -Parameter | Description -----------|------------- -`name` | The name of the role to create. Role names are case-insensitive; must start with either a letter or underscore; must contain only letters, numbers, periods, or underscores; and must be between 1 and 63 characters.

    Note that roles and [users](create-user.html) share the same namespace and must be unique. -`WITH role_option` | Apply a [role option](#role-options) to the role. - -### Role options - -Role option | Description -------------|-------------- -`CANCELQUERY`/`NOCANCELQUERY` | Allow or disallow a role to cancel [queries](cancel-query.html) and [sessions](cancel-session.html) of other roles. Without this role option, roles can only cancel their own queries and sessions. Even with the `CANCELQUERY` role option, non-`admin` roles cannot cancel `admin` queries or sessions. This option should usually be combined with `VIEWACTIVITY` so that the role can view other roles' query and session information.

    By default, the role option is set to `NOCANCELQUERY` for all non-`admin` roles. -`CONTROLCHANGEFEED`/`NOCONTROLCHANGEFEED` | Allow or disallow a role to run [`CREATE CHANGEFEED`](create-changefeed.html) on tables they have `SELECT` privileges on.

    By default, the role option is set to `NOCONTROLCHANGEFEED` for all non-`admin` roles. -`CONTROLJOB`/`NOCONTROLJOB` | Allow or disallow a role to [pause](pause-job.html), [resume](resume-job.html), and [cancel](cancel-job.html) jobs. Non-`admin` roles cannot control jobs created by `admin` roles.

    By default, the role option is set to `NOCONTROLJOB` for all non-`admin` roles. -`CREATEDB`/`NOCREATEDB` | Allow or disallow a role to [create](create-database.html) or [rename](rename-database.html) a database. The role is assigned as the owner of the database.

    By default, the role option is set to `NOCREATEDB` for all non-`admin` roles. -`CREATELOGIN`/`NOCREATELOGIN` | Allow or disallow a role to manage authentication using the `WITH PASSWORD`, `VALID UNTIL`, and `LOGIN/NOLOGIN` role options.

    By default, the role option is set to `NOCREATELOGIN` for all non-`admin` roles. -`CREATEROLE`/`NOCREATEROLE` | Allow or disallow a new role to create, [alter](alter-role.html), and [drop](drop-role.html) other non-`admin` roles.

    By default, the role option is set to `NOCREATEROLE` for all non-`admin` roles. -`LOGIN`/`NOLOGIN` | Allow or disallow a role to log in with one of the [client authentication methods](authentication.html#client-authentication). Setting the role option to `NOLOGIN` prevents the role from logging in using any authentication method. -`MODIFYCLUSTERSETTING`/`NOMODIFYCLUSTERSETTING` | Allow or disallow a role to modify the [cluster settings](cluster-settings.html) with the `sql.defaults` prefix.

    By default, the role option is set to `NOMODIFYCLUSTERSETTING` for all non-`admin` roles. -`PASSWORD password`/`PASSWORD NULL` | The credential the role uses to [authenticate their access to a secure cluster](authentication.html#client-authentication). A password should be entered as a [string literal](sql-constants.html#string-literals). For compatibility with PostgreSQL, a password can also be entered as an identifier.

To prevent a role from using [password authentication](authentication.html#client-authentication) and to mandate [certificate-based client authentication](authentication.html#client-authentication), [set the password as `NULL`](#prevent-a-role-from-using-password-authentication). -`SQLLOGIN`/`NOSQLLOGIN` | Allow or disallow a role to log in using the SQL CLI with one of the [client authentication methods](authentication.html#client-authentication). Setting the role option to `NOSQLLOGIN` prevents the role from logging in using the SQL CLI with any authentication method while retaining the ability to log in to the DB Console. It is possible to have both `NOSQLLOGIN` and `LOGIN` set for a role, in which case `NOSQLLOGIN` takes precedence.

    Without any role options all login behavior is permitted. -`VALID UNTIL` | The date and time (in the [`timestamp`](timestamp.html) format) after which the [password](#parameters) is not valid. -`VIEWACTIVITY`/`NOVIEWACTIVITY` | Allow or disallow a role to see other roles' [queries](show-statements.html) and [sessions](show-sessions.html) using `SHOW STATEMENTS`, `SHOW SESSIONS`, and the [**Statements**](ui-statements-page.html) and [**Transactions**](ui-transactions-page.html) pages in the DB Console. `VIEWACTIVITY` also permits visibility of node hostnames and IP addresses in the DB Console. With `NOVIEWACTIVITY`, the `SHOW` commands show only the role's own data, and DB Console pages redact node hostnames and IP addresses.

    By default, the role option is set to `NOVIEWACTIVITY` for all non-`admin` roles. -`VIEWCLUSTERSETTING` / `NOVIEWCLUSTERSETTING` | Allow or disallow a role to view the [cluster settings](cluster-settings.html) with `SHOW CLUSTER SETTING` or to access the [**Cluster Settings**](ui-debug-pages.html) page in the DB Console.

By default, the role option is set to `NOVIEWCLUSTERSETTING` for all non-`admin` roles. -`VIEWACTIVITYREDACTED`/`NOVIEWACTIVITYREDACTED` | Allow or disallow a role to see other roles' queries and sessions using `SHOW STATEMENTS`, `SHOW SESSIONS`, and the Statements and Transactions pages in the DB Console. With `VIEWACTIVITYREDACTED`, a user will not have access to the statements diagnostics bundle (which can contain PII) in the DB Console, and will not be able to list queries containing [constants](sql-constants.html) for other users when using the `listSessions` endpoint through the [Cluster API](cluster-api.html). It is possible to have both `VIEWACTIVITY` and `VIEWACTIVITYREDACTED` set, in which case `VIEWACTIVITYREDACTED` takes precedence. If the user has `VIEWACTIVITY` but does not have `VIEWACTIVITYREDACTED`, they will be able to see DB Console pages and have access to the statements diagnostics bundle.

    By default, the role option is set to `NOVIEWACTIVITYREDACTED` for all non-`admin` roles. - -## Examples - -To run the following examples, [start a secure single-node cluster](cockroach-start-single-node.html) and use the built-in SQL shell: - -~~~ shell -$ cockroach sql --certs-dir=certs -~~~ - -~~~ sql -> SHOW ROLES; -~~~ - -~~~ -username | options | member_of ----------+---------+------------ -admin | | {} -root | | {admin} -(2 rows) -~~~ - -{{site.data.alerts.callout_info}} -The following statements are run by the `root` user that is a member of the `admin` role and has `ALL` privileges. -{{site.data.alerts.end}} - -### Create a role - -Role names are case-insensitive; must start with a letter, number, or underscore; must contain only letters, numbers, periods, or underscores; and must be between 1 and 63 characters. - -~~~ sql -root@:26257/defaultdb> CREATE ROLE no_options; -~~~ - -~~~ sql -root@:26257/defaultdb> SHOW ROLES; -~~~ - -~~~ - username | options | member_of - ----------+---------+------------ -admin | | {} -no_options | NOLOGIN | {} -root | | {admin} -(3 rows) -~~~ - -After creating roles, you must [grant them privileges to databases](grant.html). - -### Create a role that can log in to the database - -~~~ sql -root@:26257/defaultdb> CREATE ROLE can_login WITH LOGIN PASSWORD '$tr0nGpassW0rD' VALID UNTIL '2021-10-10'; -~~~ - -~~~ sql -root@:26257/defaultdb> SHOW ROLES; -~~~ - -~~~ - username | options | member_of ------------+---------------------------------------+------------ -admin | | {} -can_login | VALID UNTIL=2021-10-10 00:00:00+00:00 | {} -no_options | NOLOGIN | {} -root | | {admin} -(4 rows) -~~~ - -### Prevent a role from using password authentication - -The following statement prevents the role from using password authentication and mandates certificate-based client authentication: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE ROLE no_password WITH PASSWORD NULL; -~~~ - -~~~ sql -root@:26257/defaultdb> SHOW ROLES; -~~~ - -~~~ - username | options | member_of ------------+---------------------------------------+------------ -admin | | {} -can_login | VALID UNTIL=2021-10-10 00:00:00+00:00 | {} -no_options | NOLOGIN | {} -no_password| NOLOGIN | {} -root | | {admin} -(5 rows) -~~~ - -### Create a role that can create other roles and manage authentication methods for the new roles - -The following example allows the role to [create other users](create-user.html) and [manage authentication methods](authentication.html#client-authentication) for them: - -~~~ sql -root@:26257/defaultdb> CREATE ROLE can_create_role WITH CREATEROLE CREATELOGIN; -~~~ - -~~~ sql -root@:26257/defaultdb> SHOW ROLES; -~~~ - -~~~ - username | options | member_of -----------------+---------------------------------------+------------ -admin | | {} -can_create_role | CREATELOGIN, CREATEROLE, NOLOGIN | {} -can_login | VALID UNTIL=2021-10-10 00:00:00+00:00 | {} -no_options | NOLOGIN | {} -no_password | NOLOGIN | {} -root | | {admin} -(6 rows) -~~~ - -### Create a role that can create and rename databases - -The following example allows the role to [create](create-database.html) or [rename](rename-database.html) databases: - -~~~ sql -root@:26257/defaultdb> CREATE ROLE can_create_db WITH CREATEDB; -~~~ - -~~~ sql -root@:26257/defaultdb> SHOW ROLES; -~~~ - -~~~ - username | options | member_of -----------------------+---------------------------------------+------------ -admin | | {} -can_create_db | CREATEDB, NOLOGIN | {} -can_create_role | CREATELOGIN, CREATEROLE, NOLOGIN | {} 
-can_login | VALID UNTIL=2021-10-10 00:00:00+00:00 | {} -no_options | NOLOGIN | {} -no_password | NOLOGIN | {} -root | | {admin} -(7 rows) -~~~ - -### Create a role that can pause, resume, and cancel non-admin jobs - -The following example allows the role to [pause](pause-job.html), [resume](resume-job.html), and [cancel](cancel-job.html) jobs: - -~~~ sql -root@:26257/defaultdb> CREATE ROLE can_control_job WITH CONTROLJOB; -~~~ - -~~~ sql -root@:26257/defaultdb> SHOW ROLES; -~~~ - -~~~ - username | options | member_of -----------------------+---------------------------------------+------------ -admin | | {} -can_control_job | CONTROLJOB, NOLOGIN | {} -can_create_db | CREATEDB, NOLOGIN | {} -can_create_role | CREATELOGIN, CREATEROLE, NOLOGIN | {} -can_login | VALID UNTIL=2021-10-10 00:00:00+00:00 | {} -manage_auth_for_roles | CREATELOGIN, NOLOGIN | {} -no_options | NOLOGIN | {} -no_password | NOLOGIN | {} -root | | {admin} -(8 rows) -~~~ - -### Create a role that can see and cancel non-admin queries and sessions - -The following example allows the role to cancel [queries](cancel-query.html) and [sessions](cancel-session.html) for other non-`admin` roles: - -~~~ sql -root@:26257/defaultdb> CREATE ROLE can_manage_queries WITH CANCELQUERY VIEWACTIVITY; -~~~ - -~~~ sql -root@:26257/defaultdb> SHOW ROLES; -~~~ - -~~~ - username | options | member_of -----------------------+---------------------------------------+------------ -admin | | {} -can_control_job | CONTROLJOB, NOLOGIN | {} -can_create_db | CREATEDB, NOLOGIN | {} -can_create_role | CREATELOGIN, CREATEROLE, NOLOGIN | {} -can_login | VALID UNTIL=2021-10-10 00:00:00+00:00 | {} -can_manage_queries | CANCELQUERY, NOLOGIN, VIEWACTIVITY | {} -no_options | NOLOGIN | {} -no_password | NOLOGIN | {} -root | | {admin} -(9 rows) -~~~ - -### Create a role that can control changefeeds - -The following example allows the role to run [`CREATE CHANGEFEED`](create-changefeed.html): - -~~~ sql -root@:26257/defaultdb> CREATE ROLE can_control_changefeed WITH CONTROLCHANGEFEED; -~~~ - -~~~ sql -root@:26257/defaultdb> SHOW ROLES; -~~~ - -~~~ - username | options | member_of ------------------------+---------------------------------------+------------ -admin | | {} -can_control_changefeed | CONTROLCHANGEFEED, NOLOGIN | {} -can_control_job | CONTROLJOB, NOLOGIN | {} -can_create_db | CREATEDB, NOLOGIN | {} -can_create_role | CREATELOGIN, CREATEROLE, NOLOGIN | {} -can_login | VALID UNTIL=2021-10-10 00:00:00+00:00 | {} -can_manage_queries | CANCELQUERY, NOLOGIN, VIEWACTIVITY | {} -no_options | NOLOGIN | {} -no_password | NOLOGIN | {} -root | | {admin} -(10 rows) -~~~ - -### Create a role that can modify cluster settings - -The following example allows the role to modify [cluster settings](cluster-settings.html): - -~~~ sql -root@:26257/defaultdb> CREATE ROLE can_modify_cluster_setting WITH MODIFYCLUSTERSETTING; -~~~ - -~~~ sql -root@:26257/defaultdb> SHOW ROLES; -~~~ - -~~~ - username | options | member_of ----------------------------+---------------------------------------+------------ -admin | | {} -can_control_changefeed | CONTROLCHANGEFEED, NOLOGIN | {} -can_control_job | CONTROLJOB, NOLOGIN | {} -can_create_db | CREATEDB, NOLOGIN | {} -can_create_role | CREATELOGIN, CREATEROLE, NOLOGIN | {} -can_login | VALID UNTIL=2021-10-10 00:00:00+00:00 | {} -can_manage_queries | CANCELQUERY, NOLOGIN, VIEWACTIVITY | {} -can_modify_cluster_setting | MODIFYCLUSTERSETTING, NOLOGIN | {} -no_options | NOLOGIN | {} -no_password | NOLOGIN | {} -root | | {admin} -(11 rows) -~~~ - 
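
As noted above, newly created roles have no privileges until you [grant](grant.html) them. A minimal sketch (assuming a `bank` database with an `accounts` table already exists) might be:

~~~ sql
root@:26257/defaultdb> GRANT ALL ON DATABASE bank TO can_create_db;
root@:26257/defaultdb> GRANT SELECT ON TABLE bank.accounts TO can_login;
~~~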
-## See also - -- [Authorization](authorization.html) -- [Authorization Best Practices](security-reference/authorization.html#authorization-best-practices) -- [`DROP ROLE`](drop-role.html) -- [`GRANT`](grant.html) -- [`REVOKE`](revoke.html) -- [`SHOW ROLES`](show-roles.html) -- [`SHOW USERS`](show-users.html) -- [`SHOW GRANTS`](show-grants.html) -- [SQL Statements](sql-statements.html) -- [Online Schema Changes](online-schema-changes.html) diff --git a/src/current/v22.1/create-schedule-for-backup.md b/src/current/v22.1/create-schedule-for-backup.md deleted file mode 100644 index 88e9e353833..00000000000 --- a/src/current/v22.1/create-schedule-for-backup.md +++ /dev/null @@ -1,243 +0,0 @@ ---- -title: CREATE SCHEDULE FOR BACKUP -summary: The CREATE SCHEDULE FOR BACKUP statement creates a schedule for periodic backups. -toc: true -docs_area: reference.sql ---- - - The `CREATE SCHEDULE FOR BACKUP` [statement](sql-statements.html) creates a schedule for periodic [backups](backup.html). - -For more information about creating, managing, monitoring, and restoring from a scheduled backup, see [Manage a Backup Schedule](manage-a-backup-schedule.html). - -{{site.data.alerts.callout_info}} -Core users can only use backup scheduling for [full backups](#create-a-schedule-for-full-backups-only-core) of clusters, databases, or tables. If you do not specify the `FULL BACKUP ALWAYS` clause when you schedule a backup, you will receive a warning that the schedule will only run full backups. - -To use the other backup features, you need an [Enterprise license](enterprise-licensing.html). -{{site.data.alerts.end}} - -## Required privileges - -- Only members of the [`admin` role](security-reference/authorization.html#default-roles) can run `CREATE SCHEDULE FOR BACKUP`. By default, the `root` user belongs to the `admin` role. -- `BACKUP` requires full read and write (including delete and overwrite) permissions to its target destination. - -## Synopsis - -~~~ -CREATE SCHEDULE [IF NOT EXISTS]
Fault Tolerance Goals | 3 nodes | 5 nodes | 9 nodes
----------------------|---------|---------|--------
1 Node | RF = 3 | RF = 3 | RF = 3
1 AZ | RF = 3 | RF = 3 | RF = 3
2 Nodes | Not possible | RF = 5 | RF = 5
1 AZ + 1 Node | Not possible | Not possible | RF = 9
2 AZ | Not possible | Not possible | Not possible
    - -To be able to survive 2+ availability zones failing, scale to a [multi-region](#multi-region-survivability-planning) deployment. - -### Single-region recovery - -For hardware failures in a single-region cluster, the recovery actions vary and depend on the type of infrastructure used. - -For example, consider a cloud-deployed CockroachDB cluster with the following setup: - -- Single-region -- 3 nodes -- A node in each availability zone (i.e., 3 AZs) -- Replication factor of 3 - -The table below describes what actions to take to recover from various hardware failures in this example cluster: - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Failure | Availability | Consequence | Action to Take |
|---------|--------------|-------------|----------------|
| 1 Disk |  | Fewer resources are available. Some data will be under-replicated until the failed nodes are marked dead. Once marked dead, data is replicated to other nodes and the cluster remains healthy. | Restart the node with a new disk. |
| 1 Node |  |  | If the node or AZ becomes unavailable, check the Overview dashboard on the DB Console. |
| 1 AZ |  |  | If the node or AZ becomes unavailable, check the Overview dashboard on the DB Console. |
| 2 Nodes | X | Cluster is unavailable. | Restart 1 of the 2 nodes that are down to regain quorum. If you can’t recover at least 1 node, contact Cockroach Labs support for assistance. |
| 1 AZ + 1 Node | X | Cluster is unavailable. | Restart the node that is down to regain quorum. When the AZ comes back online, try restarting the node. If you can’t recover at least 1 node, contact Cockroach Labs support for assistance. |
| 2 AZ | X | Cluster is unavailable. | When the AZ comes back online, try restarting at least 1 of the nodes. You can also contact Cockroach Labs support for assistance. |
| 3 Nodes | X | Cluster is unavailable. | Restart 2 of the 3 nodes that are down to regain quorum. If you can’t recover 2 of the 3 failed nodes, contact Cockroach Labs support for assistance. |
| 1 Region | X | Cluster is unavailable. Potential data loss between last backup and time of outage if the region and nodes did not come back online. | When the region comes back online, try restarting the nodes in the cluster. If the region does not come back online and nodes are lost or destroyed, try restoring the latest cluster backup into a new cluster. You can also contact Cockroach Labs support for assistance. |
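A minimal sketch of checking node liveness from SQL, as a complement to the DB Console checks above (the `crdb_internal` schema is not a stable API, so column names may differ by version):

~~~ sql
-- List nodes with their gossiped liveness status; dead or suspect nodes
-- show is_live = false until they rejoin the cluster.
SELECT node_id, address, is_live
FROM crdb_internal.gossip_nodes
ORDER BY node_id;
~~~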
    - -{{site.data.alerts.callout_info}} -When using Kubernetes, recovery actions happen automatically in many cases and no action needs to be taken. -{{site.data.alerts.end}} - -### Multi-region survivability planning - -{{site.data.alerts.callout_success}} - By default, every [multi-region database](multiregion-overview.html) has a [zone-level survival goal](multiregion-overview.html#survival-goals) associated with it. The survival goal setting provides an abstraction that handles the low-level details of replica placement to ensure your desired fault tolerance. The information below is still useful for legacy deployments. -{{site.data.alerts.end}} - -The table below shows the replication factor (RF) needed to achieve the listed fault tolerance (e.g., survive 1 failed node) for a multi-region, cloud-deployed cluster with 3 availability zones (AZ) per region and one node in each AZ: - -{{site.data.alerts.callout_danger}} -The chart below describes the CockroachDB default behavior when locality flags are correctly set. It does not use geo-partitioning or a specific [topology pattern](topology-patterns.html). For a multi-region cluster in production, we do not recommend using the default behavior, as the cluster's performance will be negatively affected. -{{site.data.alerts.end}} - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Fault Tolerance Goals | 3 Regions (9 Nodes Total) | 4 Regions (12 Nodes Total) | 5 Regions (15 Nodes Total) |
|-----------------------|---------------------------|----------------------------|----------------------------|
| 1 Node                | RF = 3                    | RF = 3                     | RF = 3                     |
| 1 AZ                  | RF = 3                    | RF = 3                     | RF = 3                     |
| 1 Region              | RF = 3                    | RF = 3                     | RF = 3                     |
| 2 Nodes               | RF = 5                    | RF = 5                     | RF = 5                     |
| 1 Region + 1 Node     | RF = 9                    | RF = 7                     | RF = 5                     |
| 2 Regions             | Not possible              | Not possible               | RF = 5                     |
| 2 Regions + 1 Node    | Not possible              | Not possible               | RF = 15                    |
    - -### Multi-region recovery - -For hardware failures in a multi-region cluster, the actions taken to recover vary and depend on the type of infrastructure used. - -For example, consider a cloud-deployed CockroachDB cluster with the following setup: - -- 3 regions -- 3 AZs per region -- 9 nodes (1 node per AZ) -- Replication factor of 3 - -The table below describes what actions to take to recover from various hardware failures in this example cluster: - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Failure | Availability | Consequence | Action to Take |
|---------|--------------|-------------|----------------|
| 1 Disk |  | Under-replicated data. Fewer resources for workload. | Restart the node with a new disk. |
| 1 Node |  |  | If the node or AZ becomes unavailable, check the Overview dashboard on the DB Console. |
| 1 AZ |  |  | If the node or AZ becomes unavailable, check the Overview dashboard on the DB Console. |
| 1 Region |  |  | Check the Overview dashboard on the DB Console. If nodes are marked Dead, decommission the nodes and add 3 new nodes in a new region. Ensure that locality flags are set correctly upon node startup. |
| 2 or More Regions | X | Cluster is unavailable. Potential data loss between last backup and time of outage if the region and nodes did not come back online. | When the regions come back online, try restarting the nodes in the cluster. If the regions do not come back online and nodes are lost or destroyed, try restoring the latest cluster backup into a new cluster. You can also contact Cockroach Labs support for assistance. |
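A minimal sketch for verifying that replacement nodes registered their localities correctly after a region is rebuilt:

~~~ sql
-- Regions and availability zones are derived from the --locality flags
-- passed at node startup; a missing region here points to a misconfigured flag.
SHOW REGIONS FROM CLUSTER;
~~~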
    - -{{site.data.alerts.callout_info}} -When using Kubernetes, recovery actions happen automatically in many cases and no action needs to be taken. -{{site.data.alerts.end}} - -## Data failure - -When dealing with data failure due to bad actors, rogue applications, or data corruption, domain expertise is required to identify the affected rows and determine how to remedy the situation (e.g., remove the incorrectly inserted rows, insert deleted rows, etc.). However, there are a few actions that you can take for short-term remediation: - -- If you are within the garbage collection window, [run differentials](#run-differentials). -- If you have a backup file, [restore to a point in time](#restore-to-a-point-in-time). -- If your cluster is running and you do not have a backup with the data you need, [create a new backup](#create-a-new-backup). -- To [recover from corrupted data in a database or table](#recover-from-corrupted-data-in-a-database-or-table), restore the corrupted object. - -{{site.data.alerts.callout_success}} -To give yourself more time to recover and clean up the corrupted data, put your application in “read only” mode and only run [`AS OF SYSTEM TIME`](as-of-system-time.html) queries from the application. -{{site.data.alerts.end}} - -### Run differentials - -If you are within the [garbage collection window](configure-replication-zones.html#replication-zone-variables) (default is 25 hours), run [`AS OF SYSTEM TIME`](as-of-system-time.html) queries and use [`CREATE TABLE AS … SELECT * FROM`](create-table-as.html) to create comparison data and run differentials to find the offending rows to fix. - -If you are outside of the garbage collection window, you will need to use a [backup](backup.html) to run comparisons. - -### Restore to a point in time - -- If you are a core user, use a [backup](backup.html) that was taken with [`AS OF SYSTEM TIME`](as-of-system-time.html) to restore to a specific point. -- If you are an {{ site.data.products.enterprise }} user, use your [backup](backup.html) file to [restore to a point in time](take-backups-with-revision-history-and-restore-from-a-point-in-time.html) where you are certain there was no corruption. Note that the backup must have been taken with [revision history](backup.html#with-revision-history). - -### Create a new backup - -If your cluster is running, you do not have a backup that encapsulates the time you want to [restore](restore.html) to, and the data you want to recover is still in the [garbage collection window](configure-replication-zones.html#replication-zone-variables), there are two actions you can take: - -- If you are a core user, trigger a [backup](backup.html) using [`AS OF SYSTEM TIME`](as-of-system-time.html) to create a new backup that encapsulates the specific time. The `AS OF SYSTEM TIME` must be within the [garbage collection window](configure-replication-zones.html#replication-zone-variables) (default is 25 hours). -- If you are an {{ site.data.products.enterprise }} user, trigger a new [backup `with_revision_history`](take-backups-with-revision-history-and-restore-from-a-point-in-time.html) and you will have a backup you can use to restore to the desired point in time within the [garbage collection window](configure-replication-zones.html#replication-zone-variables) (default is 25 hours). - -### Recover from corrupted data in a database or table - -If you have corrupted data in a database or table, [restore](restore.html) the object from a prior [backup](backup.html). 
If revision history is in the backup, you can restore from a [point in time](take-backups-with-revision-history-and-restore-from-a-point-in-time.html). - -Instead of dropping the corrupted table or database, we recommend [renaming the table](rename-table.html) or [renaming the database](rename-database.html) so you have historical data to compare to later. If you drop a database, the database cannot be referenced with `AS OF SYSTEM TIME` queries (see [#51380](https://github.com/cockroachdb/cockroach/issues/51380) for more information), and you will need to take a backup that is backdated to the system time when the database still existed. - -{{site.data.alerts.callout_info}} -If the table you are restoring has foreign keys, [careful consideration](restore.html#remove-the-foreign-key-before-restore) should be applied to make sure data integrity is maintained during the restore process. -{{site.data.alerts.end}} - -## Compromised security keys - -CockroachDB maintains a secure environment for your data. However, there are bad actors who may find ways to gain access or expose important security information. In the event that this happens, there are a few things you can do to get ahead of a security issue: - -- If you have [changefeeds to cloud storage sinks](#changefeeds-to-cloud-storage), cancel the changefeed job and restart it with new access credentials. -- If you are using [{{ site.data.products.enterprise }} Encryption At Rest](#encryption-at-rest), rotate the store key(s). -- If you are using [wire encryption in flight / TLS](#wire-encryption-tls), rotate your keys. - -### Changefeeds to cloud storage - -1. [Cancel the changefeed job](cancel-job.html) immediately and [record the high water timestamp](monitor-and-debug-changefeeds.html#monitor-a-changefeed) for where the changefeed was stopped. -2. Remove the access keys from the identity management system of your cloud provider and replace with a new set of access keys. -3. [Create a new changefeed](create-changefeed.html#start-a-new-changefeed-where-another-ended) with the new access credentials using the last high water timestamp. - -### Encryption at rest - -If you believe the user-defined store keys have been compromised, quickly attempt to rotate your store keys that are being used for your encryption at rest setup. If this key has already been compromised and the store keys were rotated by a bad actor, the cluster should be wiped if possible and [restored](restore.html) from a prior backup. - -If the compromised keys were not rotated by a bad actor, quickly attempt to [rotate the store key](security-reference/encryption.html#rotating-keys) by restarting each of the nodes with the old key and the new key. For an example on how to do this, see [Encryption](encryption.html#changing-encryption-algorithm-or-keys). - -Once all of the nodes are restarted with the new key, put in a request to revoke the old key from the Certificate Authority. - -{{site.data.alerts.callout_info}} -CockroachDB does not allow prior store keys to be used again. -{{site.data.alerts.end}} - -### Wire Encryption / TLS - -As a best practice, [keys should be rotated](rotate-certificates.html). In the event that keys have been compromised, quickly attempt to rotate your keys. This can include rotating node certificates, client certificates, and the CA certificate. 
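A minimal sketch of the changefeed credential-rotation steps described above; the job ID, table name, storage URI, and cursor timestamp are placeholders:

~~~ sql
-- 1. Stop the compromised changefeed and note its high-water timestamp.
CANCEL JOB 12345;
-- 2. Rotate the access keys with your cloud provider, then:
-- 3. Start a new changefeed from where the old one stopped.
CREATE CHANGEFEED FOR TABLE orders
  INTO 's3://bucket/orders?AWS_ACCESS_KEY_ID=NEW_KEY&AWS_SECRET_ACCESS_KEY=NEW_SECRET'
  WITH cursor = '1652985039200000000.0000000000';
~~~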
- -## See also - -- [Fault Tolerance & Recovery](demo-fault-tolerance-and-recovery.html) -- [Back up and Restore Data](take-full-and-incremental-backups.html) -- [Topology Patterns](topology-patterns.html) -- [Production Checklist](recommended-production-settings.html) diff --git a/src/current/v22.1/drop-column.md b/src/current/v22.1/drop-column.md deleted file mode 100644 index 0f694a4fdf7..00000000000 --- a/src/current/v22.1/drop-column.md +++ /dev/null @@ -1,215 +0,0 @@ ---- -title: DROP COLUMN -summary: Use the ALTER COLUMN statement to remove columns from tables. -toc: true -docs_area: reference.sql ---- - -The `DROP COLUMN` [statement](sql-statements.html) is part of `ALTER TABLE` and removes columns from a table. - -{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %} - -{{site.data.alerts.callout_danger}} -When used in an explicit transaction combined with other schema changes to the same table, `DROP COLUMN` can result in data loss if one of the other schema changes fails or is canceled. To work around this, move the `DROP COLUMN` statement to its own explicit transaction or run it in a single statement outside the existing transaction. -{{site.data.alerts.end}} - -{{site.data.alerts.callout_info}} - By default, `DROP COLUMN` drops any [indexes](indexes.html) on the column being dropped, and any indexes that reference the column, including indexes with [`STORING` clauses](create-index.html#store-columns) that reference the column. -{{site.data.alerts.end}} - -{% include {{ page.version.version }}/sql/combine-alter-table-commands.md %} - -## Synopsis - -
    {% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/drop_column.html %}
    - -## Required privileges - -The user must have the `CREATE` [privilege](security-reference/authorization.html#managing-privileges) on the table. - -## Parameters - - Parameter | Description ------------|------------- - `table_name` | The name of the table with the column you want to drop. - `name` | The name of the column you want to drop.

    When a column with a `CHECK` constraint is dropped, the `CHECK` constraint is also dropped. - `CASCADE` | Drop the column even if objects (such as [views](views.html)) depend on it; drop the dependent objects, as well. `CASCADE` will drop a column with a foreign key constraint if it is the only column in the reference.

    `CASCADE` does not list the objects it drops, so should be used cautiously.

    `CASCADE` is not required to drop an indexed column, or a column that is referenced by an index. By default, `DROP COLUMN` drops any [indexes](indexes.html) on the column being dropped, and any indexes that reference the column, including [partial indexes](partial-indexes.html) with predicates that reference the column and indexes with [`STORING` clauses](create-index.html#store-columns) that reference the column. - `RESTRICT` | *(Default)* Do not drop the column if any objects (such as [views](views.html)) depend on it. - -## Viewing schema changes - -{% include {{ page.version.version }}/misc/schema-change-view-job.md %} - -## Known limitations - -- CockroachDB prevents a column from being dropped if it is referenced by a [partial index](partial-indexes.html) predicate. To drop such a column, the partial indexes need to be dropped first using [`DROP INDEX`](drop-index.html). See [tracking issue](https://github.com/cockroachdb/cockroach/issues/97813). - -## Examples - -{% include {{page.version.version}}/sql/movr-statements.md %} - -### Drop a column - -If you no longer want a column in a table, you can drop it. - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM users; -~~~ - -~~~ - column_name | data_type | is_nullable | column_default | generation_expression | indices | is_hidden ---------------+-----------+-------------+----------------+-----------------------+-----------+------------ - id | UUID | false | NULL | | {primary} | false - city | VARCHAR | false | NULL | | {primary} | false - name | VARCHAR | true | NULL | | {primary} | false - address | VARCHAR | true | NULL | | {primary} | false - credit_card | VARCHAR | true | NULL | | {primary} | false -(5 rows) -~~~ - -If there is data in the table, the `sql_safe_updates` [session variable](set-vars.html) must be set to `false`. - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE users DROP COLUMN credit_card; -~~~ - -~~~ -ERROR: rejected (sql_safe_updates = true): ALTER TABLE DROP COLUMN will remove all data in that column -SQLSTATE: 01000 -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SET sql_safe_updates = false; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE users DROP COLUMN credit_card; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM users; -~~~ - -~~~ - column_name | data_type | is_nullable | column_default | generation_expression | indices | is_hidden ---------------+-----------+-------------+----------------+-----------------------+-----------+------------ - id | UUID | false | NULL | | {primary} | false - city | VARCHAR | false | NULL | | {primary} | false - name | VARCHAR | true | NULL | | {primary} | false - address | VARCHAR | true | NULL | | {primary} | false -(4 rows) -~~~ - -### Prevent dropping columns with dependent objects (`RESTRICT`) - -If the column has dependent objects, such as [views](views.html), CockroachDB will not drop the column by default. However, if you want to be sure of the behavior you can include the `RESTRICT` clause. - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE VIEW expensive_rides AS SELECT id, city FROM rides WHERE revenue > 90; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE rides DROP COLUMN revenue RESTRICT; -~~~ - -~~~ -ERROR: cannot drop column "revenue" because view "expensive_rides" depends on it -SQLSTATE: 2BP01 -HINT: you can drop expensive_rides instead. 
-~~~ - -### Drop a column and its dependent objects (`CASCADE`) - -If you want to drop the column and all of its dependent options, include the `CASCADE` clause. - -{{site.data.alerts.callout_danger}} -CASCADE does not list objects it drops, so should be used cautiously. -{{site.data.alerts.end}} - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW CREATE expensive_rides; -~~~ - -~~~ - table_name | create_statement -------------------+------------------------------------------------------------------------------------------------------------- - expensive_rides | CREATE VIEW public.expensive_rides (id, city) AS SELECT id, city FROM movr.public.rides WHERE revenue > 90 -(1 row) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE rides DROP COLUMN revenue CASCADE; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW CREATE expensive_rides; -~~~ - -~~~ -ERROR: relation "expensive_rides" does not exist -SQLSTATE: 42P01 -~~~ - -### Drop an indexed column - - `DROP COLUMN` drops a column and any indexes on the column being dropped. - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE INDEX start_end_idx ON rides(start_time, end_time); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> WITH x AS (SHOW INDEXES FROM rides) SELECT * FROM x WHERE index_name='start_end_idx'; -~~~ - -~~~ - table_name | index_name | non_unique | seq_in_index | column_name | direction | storing | implicit --------------+---------------+------------+--------------+-------------+-----------+---------+----------- - rides | start_end_idx | true | 1 | start_time | ASC | false | false - rides | start_end_idx | true | 2 | end_time | ASC | false | false - rides | start_end_idx | true | 3 | city | ASC | false | true - rides | start_end_idx | true | 4 | id | ASC | false | true -(4 rows) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE rides DROP COLUMN start_time; -~~~ - -~~~ -NOTICE: the data for dropped indexes is reclaimed asynchronously -HINT: The reclamation delay can be customized in the zone configuration for the table. -ALTER TABLE -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> WITH x AS (SHOW INDEXES FROM rides) SELECT * FROM x WHERE index_name='start_end_idx'; -~~~ - -~~~ - table_name | index_name | non_unique | seq_in_index | column_name | direction | storing | implicit --------------+------------+------------+--------------+-------------+-----------+---------+----------- -(0 rows) -~~~ - -## See also - -- [`DROP CONSTRAINT`](drop-constraint.html) -- [`DROP INDEX`](drop-index.html) -- [`ALTER TABLE`](alter-table.html) -- [`SHOW JOBS`](show-jobs.html) -- [Online Schema Changes](online-schema-changes.html) diff --git a/src/current/v22.1/drop-constraint.md b/src/current/v22.1/drop-constraint.md deleted file mode 100644 index 9992b14479e..00000000000 --- a/src/current/v22.1/drop-constraint.md +++ /dev/null @@ -1,159 +0,0 @@ ---- -title: DROP CONSTRAINT -summary: Use the ALTER CONSTRAINT statement to remove constraints from columns. -toc: true -docs_area: reference.sql ---- - -The `DROP CONSTRAINT` [statement](sql-statements.html) is part of [`ALTER TABLE`](alter-table.html) and removes [`CHECK`](check.html) and [`FOREIGN KEY`](foreign-key.html) constraints from columns. 
- -{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %} - -[`PRIMARY KEY`](primary-key.html) constraints can be dropped with `DROP CONSTRAINT` if an [`ADD CONSTRAINT`](add-constraint.html) statement follows the `DROP CONSTRAINT` statement in the same transaction. - -{{site.data.alerts.callout_success}} -When you change a primary key with [`ALTER TABLE ... ALTER PRIMARY KEY`](alter-primary-key.html), the old primary key index becomes a secondary index. If you do not want the old primary key to become a secondary index, use `DROP CONSTRAINT`/[`ADD CONSTRAINT`](add-constraint.html) to change the primary key. -{{site.data.alerts.end}} - -{{site.data.alerts.callout_info}} -For information about removing other constraints, see [Constraints: Remove Constraints](constraints.html#remove-constraints). -{{site.data.alerts.end}} - -{% include {{ page.version.version }}/sql/combine-alter-table-commands.md %} - -## Synopsis - -
    {% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/drop_constraint.html %}
    - -## Required privileges - -The user must have the `CREATE` [privilege](security-reference/authorization.html#managing-privileges) on the table. - -## Parameters - - Parameter | Description ------------|------------- - `table_name` | The name of the table with the constraint you want to drop. - `name` | The name of the constraint you want to drop. - -## Viewing schema changes - -{% include {{ page.version.version }}/misc/schema-change-view-job.md %} - -## Examples - -{% include {{page.version.version}}/sql/movr-statements.md %} - -### Drop a foreign key constraint - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW CONSTRAINTS FROM vehicles; -~~~ - -~~~ - table_name | constraint_name | constraint_type | details | validated --------------+-------------------+-----------------+---------------------------------------------------------+------------ - vehicles | fk_city_ref_users | FOREIGN KEY | FOREIGN KEY (city, owner_id) REFERENCES users(city, id) | true - vehicles | vehicles_pkey | PRIMARY KEY | PRIMARY KEY (city ASC, id ASC) | true -(2 rows) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE vehicles DROP CONSTRAINT fk_city_ref_users; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW CONSTRAINTS FROM vehicles; -~~~ - -~~~ - table_name | constraint_name | constraint_type | details | validated --------------+-----------------+-----------------+--------------------------------+------------ - vehicles | vehicles_pkey | PRIMARY KEY | PRIMARY KEY (city ASC, id ASC) | true -(1 row) -~~~ - -### Drop and add a primary key constraint - -When you change a primary key with [`ALTER TABLE ... ALTER PRIMARY KEY`](alter-primary-key.html), the old primary key index becomes a secondary index. If you do not want the old primary key to become a secondary index when changing a primary key, you can use `DROP CONSTRAINT`/[`ADD CONSTRAINT`](add-constraint.html) instead. - -Suppose that you want to add `name` to the composite primary key of the `users` table. - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW CREATE TABLE users; -~~~ - -~~~ - table_name | create_statement --------------+-------------------------------------------------------------- - users | CREATE TABLE users ( - | id UUID NOT NULL, - | city VARCHAR NOT NULL, - | name VARCHAR NULL, - | address VARCHAR NULL, - | credit_card VARCHAR NULL, - | CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC), - | FAMILY "primary" (id, city, name, address, credit_card) - | ) -(1 row) -~~~ - -First, add a [`NOT NULL`](not-null.html) constraint to the `name` column with [`ALTER COLUMN`](alter-column.html). 
- -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE users ALTER COLUMN name SET NOT NULL; -~~~ - -Then, in the same transaction, `DROP` the old `"primary"` constraint and [`ADD`](add-constraint.html) the new one: - -{% include_cached copy-clipboard.html %} -~~~ sql -> BEGIN; -> ALTER TABLE users DROP CONSTRAINT "primary"; -> ALTER TABLE users ADD CONSTRAINT "primary" PRIMARY KEY (city, name, id); -> COMMIT; -~~~ - -~~~ -NOTICE: primary key changes are finalized asynchronously; further schema changes on this table may be restricted until the job completes -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW CREATE TABLE users; -~~~ - -~~~ - table_name | create_statement --------------+--------------------------------------------------------------------- - users | CREATE TABLE users ( - | id UUID NOT NULL, - | city VARCHAR NOT NULL, - | name VARCHAR NOT NULL, - | address VARCHAR NULL, - | credit_card VARCHAR NULL, - | CONSTRAINT "primary" PRIMARY KEY (city ASC, name ASC, id ASC), - | FAMILY "primary" (id, city, name, address, credit_card) - | ) -(1 row) -~~~ - -Using [`ALTER PRIMARY KEY`](alter-primary-key.html) would have created a `UNIQUE` secondary index called `users_city_id_key`. Instead, there is just one index for the primary key constraint. - -## See also - -- [`ADD CONSTRAINT`](add-constraint.html) -- [`SHOW CONSTRAINTS`](show-constraints.html) -- [`RENAME CONSTRAINT`](rename-constraint.html) -- [`VALIDATE CONSTRAINT`](validate-constraint.html) -- [`DROP COLUMN`](drop-column.html) -- [`DROP INDEX`](drop-index.html) -- [`ALTER TABLE`](alter-table.html) -- [`SHOW JOBS`](show-jobs.html) -- ['ALTER PRIMARY KEY'](alter-primary-key.html) -- [Online Schema Changes](online-schema-changes.html) diff --git a/src/current/v22.1/drop-database.md b/src/current/v22.1/drop-database.md deleted file mode 100644 index 056357819f0..00000000000 --- a/src/current/v22.1/drop-database.md +++ /dev/null @@ -1,146 +0,0 @@ ---- -title: DROP DATABASE -summary: The DROP DATABASE statement removes a database and all its objects from a CockroachDB cluster. -toc: true -docs_area: reference.sql ---- - -The `DROP DATABASE` [statement](sql-statements.html) removes a database and all its objects from a CockroachDB cluster. - -{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %} - -{% include {{ page.version.version }}/misc/declarative-schema-changer-note.md %} - -## Required privileges - -The user must have the `DROP` [privilege](security-reference/authorization.html#managing-privileges) on the database and on all tables in the database. - -## Synopsis - -
    {% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/drop_database.html %}
    - -## Parameters - -Parameter | Description -----------|------------ -`IF EXISTS` | Drop the database if it exists; if it does not exist, do not return an error. -`name` | The name of the database you want to drop. You cannot drop a database if it is set as the [current database](sql-name-resolution.html#current-database) or if [`sql_safe_updates = true`](set-vars.html). -`CASCADE` | _(Default)_ Drop all tables and views in the database as well as all objects (such as [constraints](constraints.html) and [views](views.html)) that depend on those tables.

    `CASCADE` does not list objects it drops, so should be used cautiously. -`RESTRICT` | Do not drop the database if it contains any [tables](create-table.html) or [views](create-view.html). - -## Viewing schema changes - -{% include {{ page.version.version }}/misc/schema-change-view-job.md %} - -## Examples - -{% include {{page.version.version}}/sql/movr-statements.md %} - -### Drop a database and its objects (`CASCADE`) - -For non-interactive sessions (e.g., client applications), `DROP DATABASE` applies the `CASCADE` option by default, which drops all tables and views in the database as well as all objects (such as [constraints](constraints.html) and [views](views.html)) that depend on those tables. - -For interactive sessions from the [built-in SQL client](cockroach-sql.html), either the `CASCADE` option must be set explicitly or the `--unsafe-updates` flag must be set when starting the shell. - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW TABLES FROM movr; -~~~ - -~~~ - schema_name | table_name | type | estimated_row_count ---------------+----------------------------+-------+---------------------- - public | promo_codes | table | 1000 - public | rides | table | 500 - public | user_promo_codes | table | 0 - public | users | table | 50 - public | vehicle_location_histories | table | 1000 - public | vehicles | table | 15 -(6 rows) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> DROP DATABASE movr; -~~~ - -~~~ -ERROR: rejected (sql_safe_updates = true): DROP DATABASE on current database -SQLSTATE: 01000 -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> USE defaultdb; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> DROP DATABASE movr; -~~~ - -~~~ -ERROR: rejected (sql_safe_updates = true): DROP DATABASE on non-empty database without explicit CASCADE -SQLSTATE: 01000 -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> DROP DATABASE movr CASCADE; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW TABLES FROM movr; -~~~ - -~~~ -ERROR: target database or schema does not exist -SQLSTATE: 3F000 -~~~ - -### Prevent dropping a non-empty database (`RESTRICT`) - -When a database is not empty, the `RESTRICT` option prevents the database from being dropped: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW TABLES FROM movr; -~~~ - -~~~ - schema_name | table_name | type | estimated_row_count ---------------+----------------------------+-------+---------------------- - public | promo_codes | table | 1000 - public | rides | table | 500 - public | user_promo_codes | table | 0 - public | users | table | 50 - public | vehicle_location_histories | table | 1000 - public | vehicles | table | 15 -(6 rows) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> USE defaultdb; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> DROP DATABASE movr RESTRICT; -~~~ - -~~~ -ERROR: database "movr" is not empty and RESTRICT was specified -SQLSTATE: 2BP01 -~~~ - -## See also - -- [`CREATE DATABASE`](create-database.html) -- [`SHOW DATABASES`](show-databases.html) -- [`RENAME DATABASE`](rename-database.html) -- [`SET DATABASE`](set-vars.html) -- [`SHOW JOBS`](show-jobs.html) -- [SQL Statements](sql-statements.html) -- [Online Schema Changes](online-schema-changes.html) diff --git a/src/current/v22.1/drop-index.md b/src/current/v22.1/drop-index.md deleted file mode 100644 index 71caf21fcd9..00000000000 --- a/src/current/v22.1/drop-index.md +++ /dev/null @@ -1,196 +0,0 @@ ---- -title: DROP INDEX -summary: The DROP INDEX 
statement removes indexes from tables. -toc: true -docs_area: reference.sql ---- - -The `DROP INDEX` [statement](sql-statements.html) removes indexes from tables. - -{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %} - -## Synopsis - -
    {% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/drop_index.html %}
    - -## Required privileges - -The user must have the `CREATE` [privilege](security-reference/authorization.html#managing-privileges) on each specified table. - -## Parameters - - Parameter | Description ------------|------------- - `IF EXISTS` | Drop the named indexes if they exist; if they do not exist, do not return an error. - `table_name` | The name of the table with the index you want to drop. Find table names with [`SHOW TABLES`](show-tables.html). - `index_name` | The name of the index you want to drop. Find index names with [`SHOW INDEX`](show-index.html).

    You cannot drop a table's primary index. - `CASCADE` | Drop all objects (such as [constraints](constraints.html)) that depend on the indexes. `CASCADE` does not list objects it drops, so should be used cautiously.

    To drop an index created with [`CREATE UNIQUE INDEX`](create-index.html#unique-indexes), you do not need to use `CASCADE`. - `RESTRICT` | _(Default)_ Do not drop the indexes if any objects (such as [constraints](constraints.html)) depend on them. - `CONCURRENTLY` | Optional, no-op syntax for PostgreSQL compatibility. All indexes are dropped concurrently in CockroachDB. - -## Viewing schema changes - -{% include {{ page.version.version }}/misc/schema-change-view-job.md %} - -## Examples - -{% include {{page.version.version}}/sql/movr-statements.md %} - -### Remove an index with no dependencies - -{{site.data.alerts.callout_danger}} -{% include {{ page.version.version }}/known-limitations/drop-unique-index-from-create-table.md %} -{{site.data.alerts.end}} - -Suppose you create an index on the `name` and `city` columns of the `users` table: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE INDEX ON users (name, city); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW INDEXES FROM users; -~~~ - -~~~ - table_name | index_name | non_unique | seq_in_index | column_name | direction | storing | implicit --------------+---------------------+------------+--------------+-------------+-----------+---------+----------- - users | users_pkey | false | 1 | city | ASC | false | false - users | users_pkey | false | 2 | id | ASC | false | false - users | users_pkey | false | 3 | name | N/A | true | false - users | users_pkey | false | 4 | address | N/A | true | false - users | users_pkey | false | 5 | credit_card | N/A | true | false - users | users_name_city_idx | true | 1 | name | ASC | false | false - users | users_name_city_idx | true | 2 | city | ASC | false | false - users | users_name_city_idx | true | 3 | id | ASC | false | true -(8 rows) -~~~ - -You can drop this index with the `DROP INDEX` statement: - -{% include_cached copy-clipboard.html %} -~~~ sql -> DROP INDEX users@users_name_city_idx; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW INDEXES FROM users; -~~~ - -~~~ - table_name | index_name | non_unique | seq_in_index | column_name | direction | storing | implicit --------------+------------+------------+--------------+-------------+-----------+---------+----------- - users | users_pkey | false | 1 | city | ASC | false | false - users | users_pkey | false | 2 | id | ASC | false | false - users | users_pkey | false | 3 | name | N/A | true | false - users | users_pkey | false | 4 | address | N/A | true | false - users | users_pkey | false | 5 | credit_card | N/A | true | false -(5 rows) -~~~ - -### Remove an index and dependent objects with `CASCADE` - -{{site.data.alerts.callout_danger}} -CASCADE drops all dependent objects without listing them, which can lead to inadvertent and difficult-to-recover losses. To avoid potential harm, we recommend dropping objects individually in most cases. 
-{{site.data.alerts.end}} - -Suppose you create a [`UNIQUE`](unique.html) constraint on the `id` and `name` columns of the `users` table: - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE users ADD CONSTRAINT id_name_unique UNIQUE (id, name); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW CONSTRAINTS from users; -~~~ - -~~~ - table_name | constraint_name | constraint_type | details | validated --------------+-----------------+-----------------+--------------------------------+------------ - users | id_name_unique | false | 1 | id | ASC | false | false - users | id_name_unique | false | 2 | name | ASC | false | false - users | id_name_unique | false | 3 | city | ASC | false | true - users | users_pkey | false | 1 | city | ASC | false | false - users | users_pkey | false | 2 | id | ASC | false | false - users | users_pkey | false | 3 | name | N/A | true | false - users | users_pkey | false | 4 | address | N/A | true | false - users | users_pkey | false | 5 | credit_card | N/A | true | false -(8 rows) -~~~ - -If no index exists on `id` and `name`, CockroachDB automatically creates an index: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW INDEXES from users; -~~~ - -~~~ - table_name | index_name | non_unique | seq_in_index | column_name | direction | storing | implicit --------------+----------------+------------+--------------+-------------+-----------+---------+----------- - users | users_pkey | false | 1 | city | ASC | false | false - users | users_pkey | false | 2 | id | ASC | false | false - users | id_name_unique | false | 1 | id | ASC | false | false - users | id_name_unique | false | 2 | name | ASC | false | false - users | id_name_unique | false | 3 | city | ASC | false | true -(5 rows) -~~~ - -The `UNIQUE` constraint is dependent on the `id_name_unique` index, so you cannot drop the index with a simple `DROP INDEX` statement: - -{% include_cached copy-clipboard.html %} -~~~ sql -> DROP INDEX id_name_unique; -~~~ - -~~~ -ERROR: index "id_name_unique" is in use as unique constraint -SQLSTATE: 2BP01 -HINT: use CASCADE if you really want to drop it. 
-~~~ - -To drop an index and its dependent objects, you can use `CASCADE`: - -{% include_cached copy-clipboard.html %} -~~~ sql -> DROP INDEX id_name_unique CASCADE; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW INDEXES from users; -~~~ - -~~~ - table_name | index_name | non_unique | seq_in_index | column_name | direction | storing | implicit --------------+------------+------------+--------------+-------------+-----------+---------+----------- - users | users_pkey | false | 1 | city | ASC | false | false - users | users_pkey | false | 2 | id | ASC | false | false - users | users_pkey | false | 3 | name | N/A | true | false - users | users_pkey | false | 4 | address | N/A | true | false - users | users_pkey | false | 5 | credit_card | N/A | true | false -(5 rows) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW CONSTRAINTS from users; -~~~ - -~~~ - table_name | constraint_name | constraint_type | details | validated --------------+-----------------+-----------------+--------------------------------+------------ - users | users_pkey | PRIMARY KEY | PRIMARY KEY (city ASC, id ASC) | true -(1 row) -~~~ - -## See also - -- [Indexes](indexes.html) -- [Online Schema Changes](online-schema-changes.html) -- [`SHOW JOBS`](show-jobs.html) diff --git a/src/current/v22.1/drop-region.md b/src/current/v22.1/drop-region.md deleted file mode 100644 index ba6dfa6bd5f..00000000000 --- a/src/current/v22.1/drop-region.md +++ /dev/null @@ -1,227 +0,0 @@ ---- -title: DROP REGION -summary: The DROP REGION statement drops a region from a multi-region database. -toc: true -docs_area: reference.sql ---- - - The `ALTER DATABASE .. DROP REGION` [statement](sql-statements.html) drops a [region](multiregion-overview.html#database-regions) from a [multi-region database](multiregion-overview.html). While CockroachDB processes an index modification or changing a table to or from a [`REGIONAL BY ROW` table](multiregion-overview.html#regional-by-row-tables), attempting to drop a region from the database containing that `REGIONAL BY ROW` table will produce an error. Similarly, while this statement is running, all index modifications and locality changes on [`REGIONAL BY ROW`](multiregion-overview.html#regional-by-row-tables) tables will be blocked. - -{% include enterprise-feature.md %} - -{{site.data.alerts.callout_info}} -`DROP REGION` is a subcommand of [`ALTER DATABASE`](alter-database.html). -{{site.data.alerts.end}} - -## Synopsis - -
    -{% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/alter_database_drop_region.html %} -
    - -## Parameters - -| Parameter | Description | -|-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| `database_name` | The database from which you are dropping a [region](multiregion-overview.html#database-regions). | -| `region_name` | The [region](multiregion-overview.html#database-regions) being dropped from this database. Allowed values include any region present in `SHOW REGIONS FROM DATABASE database_name`.
    You can only drop the primary region from a multi-region database if it's the last remaining region. | - -## Required privileges - -To drop a region from a database, the user must have one of the following: - -- Membership to the [`admin`](security-reference/authorization.html#roles) role for the cluster. -- Membership to the [owner](security-reference/authorization.html#object-ownership) role, or the [`CREATE` privilege](security-reference/authorization.html#supported-privileges), for the database and all [`REGIONAL BY ROW`](multiregion-overview.html#regional-by-row-tables) tables in the database. - -## Examples - -{% include {{page.version.version}}/sql/multiregion-example-setup.md %} - -### Set the primary region - -Suppose you have a database `foo` in your cluster, and you want to make it a multi-region database. - -To add the first region to the database, or to set an already-added region as the primary region, use a [`SET PRIMARY REGION`](set-primary-region.html) statement: - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER DATABASE foo SET PRIMARY REGION "us-east1"; -~~~ - -~~~ -ALTER DATABASE PRIMARY REGION -~~~ - -### Add regions to a database - -To add more regions to a database that already has at least one region, use an [`ADD REGION`](add-region.html) statement: - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER database foo ADD region "us-west1"; -~~~ - -~~~ -ALTER DATABASE ADD REGION -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER database foo ADD region "europe-west1"; -~~~ - -~~~ -ALTER DATABASE ADD REGION -~~~ - -### View a database's regions - -To view the regions associated with a multi-region database, use a [`SHOW REGIONS FROM DATABASE`](show-regions.html) statement: - -{% include_cached copy-clipboard.html %} -~~~ sql -SHOW REGIONS FROM DATABASE foo; -~~~ - -~~~ - database | region | primary | zones ------------+--------------+---------+---------- - foo | us-east1 | true | {b,c,d} - foo | europe-west1 | false | {b,c,d} - foo | us-west1 | false | {a,b,c} -(3 rows) -~~~ - -### Drop regions from a database - -To drop a region from a multi-region database, use a `DROP REGION` statement: - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER DATABASE foo DROP REGION "us-west1"; -~~~ - -~~~ -ALTER DATABASE DROP REGION -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -SHOW REGIONS FROM DATABASE foo; -~~~ - -~~~ - database | region | primary | zones ------------+--------------+---------+---------- - foo | us-east1 | true | {b,c,d} - foo | europe-west1 | false | {b,c,d} -(2 rows) -~~~ - -You can only drop the primary region from a multi-region database if it's the last remaining region. 
- -If you try to drop the primary region when there is more than one region, CockroachDB will return an error: - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER DATABASE foo DROP REGION "us-east1"; -~~~ - -~~~ -ERROR: cannot drop region "us-east1" -SQLSTATE: 42P12 -HINT: You must designate another region as the primary region using ALTER DATABASE foo PRIMARY REGION or remove all other regions before attempting to drop region "us-east1" -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER DATABASE foo DROP REGION "europe-west1"; -~~~ - -~~~ -ALTER DATABASE DROP REGION -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -SHOW REGIONS FROM DATABASE foo; -~~~ - -~~~ - database | region | primary | zones ------------+----------+---------+---------- - foo | us-east1 | true | {b,c,d} -(1 row) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER DATABASE foo DROP REGION "us-east1"; -~~~ - -~~~ -ALTER DATABASE DROP REGION -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -SHOW REGIONS FROM DATABASE foo; -~~~ - -~~~ - database | region | primary | zones ------------+--------+---------+-------- -(0 rows) -~~~ - -You cannot drop a region from a database if the databases uses [`REGION` survival goal](multiregion-overview.html#surviving-region-failures) and there are only three regions configured on the database: - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER DATABASE foo SET PRIMARY REGION "us-east1"; -~~~ - -~~~ -ALTER DATABASE PRIMARY REGION -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER DATABASE foo ADD REGION "us-west1"; -~~~ - -~~~ -ALTER DATABASE ADD REGION -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER DATABASE foo ADD REGION "europe-west1"; -~~~ - -~~~ -ALTER DATABASE ADD REGION -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER DATABASE foo DROP REGION "us-west1"; -~~~ - -~~~ -ERROR: at least 3 regions are required for surviving a region failure -SQLSTATE: 22023 -HINT: you must add additional regions to the database or change the survivability goal -~~~ - -## See also - -- [Multi-Region Capabilities Overview](multiregion-overview.html) -- [`SET PRIMARY REGION`](set-primary-region.html) -- [`ADD REGION`](add-region.html) -- [`SHOW REGIONS`](show-regions.html) -- [`ADD SUPER REGION`](add-super-region.html) -- [`DROP SUPER REGION`](drop-super-region.html) -- [`SHOW SUPER REGIONS`](show-super-regions.html) -- [`ALTER TABLE`](alter-table.html) -- [SQL Statements](sql-statements.html) diff --git a/src/current/v22.1/drop-role.md b/src/current/v22.1/drop-role.md deleted file mode 100644 index 559c41946f3..00000000000 --- a/src/current/v22.1/drop-role.md +++ /dev/null @@ -1,73 +0,0 @@ ---- -title: DROP ROLE -summary: The DROP ROLE statement removes one or more SQL roles. -toc: true -docs_area: reference.sql ---- - -The `DROP ROLE` [statement](sql-statements.html) removes one or more SQL roles. - -{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %} - -{{site.data.alerts.callout_info}} - DROP ROLE is no longer an Enterprise feature and is now freely available in the core version of CockroachDB. Also, since the keywords `ROLE` and `USER` can now be used interchangeably in SQL statements for enhanced PostgreSQL compatibility, `DROP ROLE` is now an alias for [`DROP USER`](drop-user.html). -{{site.data.alerts.end}} - -## Considerations - -- The `admin` role cannot be dropped, and `root` must always be a member of `admin`. -- A role cannot be dropped if it has privileges. 
Use [`REVOKE`](revoke.html) to remove privileges. -- Roles that [own objects](security-reference/authorization.html#object-ownership) (such as databases, tables, schemas, and types) cannot be dropped until the [ownership is transferred to another role](owner-to.html#change-a-databases-owner). - -## Required privileges - -Non-admin roles cannot drop admin roles. To drop non-admin roles, the role must be a member of the `admin` role or have the [`CREATEROLE`](create-role.html#create-a-role-that-can-create-other-roles-and-manage-authentication-methods-for-the-new-roles) parameter set. - -## Synopsis - -
    {% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/drop_role.html %}
-
-## Parameters
-
- Parameter | Description
-------------|--------------
-`name` | The name of the role to remove. To remove multiple roles, use a comma-separated list of roles.

    You can use [`SHOW ROLES`](show-roles.html) to find the names of roles. - -## Example - -In this example, first check a role's privileges. Then, revoke the role's privileges and remove the role. - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW GRANTS ON documents FOR dev_ops; -~~~ -~~~ -+------------+--------+-----------+---------+------------+ -| Database | Schema | Table | User | Privileges | -+------------+--------+-----------+---------+------------+ -| jsonb_test | public | documents | dev_ops | INSERT | -+------------+--------+-----------+---------+------------+ -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> REVOKE INSERT ON documents FROM dev_ops; -~~~ - -{{site.data.alerts.callout_info}}All of a role's privileges must be revoked before the role can be dropped.{{site.data.alerts.end}} - -{% include_cached copy-clipboard.html %} -~~~ sql -> DROP ROLE dev_ops; -~~~ - -## See also - -- [Authorization](authorization.html) -- [Authorization Best Practices](security-reference/authorization.html#authorization-best-practices) -- [`CREATE ROLE`](create-role.html) -- [`SHOW ROLES`](show-roles.html) -- [`GRANT`](grant.html) -- [`SHOW GRANTS`](show-grants.html) -- [SQL Statements](sql-statements.html) -- [Online Schema Changes](online-schema-changes.html) diff --git a/src/current/v22.1/drop-schedules.md b/src/current/v22.1/drop-schedules.md deleted file mode 100644 index 1c9e3fed210..00000000000 --- a/src/current/v22.1/drop-schedules.md +++ /dev/null @@ -1,75 +0,0 @@ ---- -title: DROP SCHEDULES -summary: The DROP SCHEDULES statement lets you remove specified backup schedules. -toc: true -docs_area: reference.sql ---- - - The `DROP SCHEDULES` [statement](sql-statements.html) can be used to remove [backup schedules](create-schedule-for-backup.html). - -{{site.data.alerts.callout_danger}} -`DROP SCHEDULE` does **not** cancel any in-progress jobs started by the schedule. Before you drop a schedule, [cancel any in-progress jobs](cancel-job.html) first, as you will not be able to look up the job ID once the schedule is dropped. -{{site.data.alerts.end}} - -## Required privileges - -Only members of the [`admin` role](security-reference/authorization.html#default-roles) can drop a schedule. By default, the `root` user belongs to the `admin` role. - -## Synopsis - -~~~ -DROP SCHEDULES - select clause: select statement returning schedule id to pause. -DROP SCHEDULE -~~~ - -## Parameters - - Parameter | Description ----------------+------------ -`selectclause` | A [selection query](selection-queries.html) that returns `id`(s) to drop. -`scheduleID` | The `id` of the schedule you want to drop, which can be found with [`SHOW SCHEDULES`](show-schedules.html). - -## Examples - -### Drop a schedule - -{% include_cached copy-clipboard.html %} -~~~ sql -> DROP SCHEDULE 589963390487363585; -~~~ - -~~~ -DROP SCHEDULES 1 -~~~ - -### Drop multiple schedules - -To drop multiple schedules, nest a [`SELECT` clause](select-clause.html) that retrieves `id`(s) inside the `DROP SCHEDULES` statement: - -{% include_cached copy-clipboard.html %} -~~~ sql -> DROP SCHEDULES WITH x AS (SHOW SCHEDULES) SELECT id FROM x WHERE label = 'schedule_database'; -~~~ - -~~~ -DROP SCHEDULES 4 -~~~ - -In this example, all schedules with the label `schedule_database` are dropped. 
- -## See also - -- [Manage a Backup Schedule](manage-a-backup-schedule.html) -- [`BACKUP`](backup.html) -- [`RESTORE`](restore.html) -- [`SHOW BACKUP`](show-backup.html) -- [`SHOW SCHEDULES`](show-schedules.html) -- [`PAUSE SCHEDULES`](pause-schedules.html) -- [`RESUME SCHEDULES`](resume-schedules.html) -- [`PAUSE JOB`](pause-job.html) -- [`RESUME JOB`](pause-job.html) -- [`CANCEL JOB`](cancel-job.html) -- [Take Full and Incremental Backups](take-full-and-incremental-backups.html) -- [Use the Built-in SQL Client](cockroach-sql.html) -- [`cockroach` Commands Overview](cockroach-commands.html) diff --git a/src/current/v22.1/drop-schema.md b/src/current/v22.1/drop-schema.md deleted file mode 100644 index 832fb5abf0c..00000000000 --- a/src/current/v22.1/drop-schema.md +++ /dev/null @@ -1,167 +0,0 @@ ---- -title: DROP SCHEMA -summary: The DROP SCHEMA statement removes a schema and all its objects from a CockroachDB cluster. -toc: true -docs_area: reference.sql ---- - -The `DROP SCHEMA` [statement](sql-statements.html) removes a user-defined [schema](sql-name-resolution.html#naming-hierarchy). - -{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %} - -{% include {{ page.version.version }}/misc/declarative-schema-changer-note.md %} - -## Required privileges - -The user must have the `DROP` [privilege](security-reference/authorization.html#managing-privileges) on the schema and on all tables in the schema. If the user is the owner of the schema, `DROP` privileges are not necessary. - -## Syntax - -
    -{% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/drop_schema.html %} -
    - -### Parameters - -Parameter | Description -----------|------------ -`IF EXISTS` | Drop the schema if it exists. If it does not exist, do not return an error. -`schema_name_list` | The schema, or a list of schemas, that you want to drop.
    To drop a schema in a database other than the current database, specify the name of the database and the name of the schema, separated by a "`.`" (e.g., `DROP SCHEMA IF EXISTS database.schema;`). -`CASCADE` | Drop all tables and views in the schema as well as all objects (such as [constraints](constraints.html) and [views](views.html)) that depend on those tables.

    `CASCADE` does not list objects it drops, so should be used cautiously. -`RESTRICT` | _(Default)_ Do not drop the schema if it contains any [tables](create-table.html) or [views](create-view.html). - -## Examples - -{% include {{page.version.version}}/sql/movr-statements.md %} - -### Drop a schema - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE SCHEMA org_one; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW SCHEMAS; -~~~ - -~~~ - schema_name ----------------------- - crdb_internal - information_schema - org_one - pg_catalog - pg_extension - public -(6 rows) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> DROP SCHEMA org_one; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW SCHEMAS; -~~~ - -~~~ - schema_name ----------------------- - crdb_internal - information_schema - pg_catalog - pg_extension - public -(5 rows) -~~~ - -### Drop a schema with tables - -To drop a schema that contains tables, you need to use the `CASCADE` keyword. - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE SCHEMA org_two; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW SCHEMAS; -~~~ - -~~~ - schema_name ----------------------- - crdb_internal - information_schema - org_two - pg_catalog - pg_extension - public -(6 rows) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE org_two.users ( - id UUID PRIMARY KEY DEFAULT gen_random_uuid(), - city STRING, - name STRING, - address STRING -); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW TABLES FROM org_two; -~~~ - -~~~ - schema_name | table_name | type | estimated_row_count ---------------+------------+-------+---------------------- - org_two | users | table | 0 -(1 row) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> DROP SCHEMA org_two; -~~~ - -~~~ -ERROR: schema "org_two" is not empty and CASCADE was not specified -SQLSTATE: 2BP01 -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> DROP SCHEMA org_two CASCADE; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW SCHEMAS; -~~~ - -~~~ - schema_name ----------------------- - crdb_internal - information_schema - pg_catalog - pg_extension - public -(5 rows) -~~~ - -## See also - -- [`CREATE SCHEMA`](create-schema.html) -- [`SHOW SCHEMAS`](show-schemas.html) -- [`SHOW JOBS`](show-jobs.html) -- [SQL Statements](sql-statements.html) -- [Online Schema Changes](online-schema-changes.html) diff --git a/src/current/v22.1/drop-sequence.md b/src/current/v22.1/drop-sequence.md deleted file mode 100644 index afadfe5ab36..00000000000 --- a/src/current/v22.1/drop-sequence.md +++ /dev/null @@ -1,95 +0,0 @@ ---- -title: DROP SEQUENCE -summary: The DROP SEQUENCE statement removes a sequence from a database. -toc: true -docs_area: reference.sql ---- - -The `DROP SEQUENCE` [statement](sql-statements.html) removes a sequence from a database. - -{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %} - -## Required privileges - -The user must have the `DROP` [privilege](security-reference/authorization.html#managing-privileges) on the specified sequence(s). - -## Synopsis - -
    {% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/drop_sequence.html %}
    - -## Parameters - - - - Parameter | Description ------------|------------ -`IF EXISTS` | Drop the sequence only if it exists; if it does not exist, do not return an error. -`sequence_name` | The name of the sequence you want to drop. Find the sequence name with `SHOW CREATE` on the table that uses the sequence. -`RESTRICT` | _(Default)_ Do not drop the sequence if any objects (such as [constraints](constraints.html) and tables) use it. -`CASCADE` | Not yet implemented. Currently, you can only drop a sequence if nothing depends on it. - - - -## Examples - -### Remove a sequence (no dependencies) - -In this example, other objects do not depend on the sequence being dropped. - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE SEQUENCE even_numbers INCREMENT 2 START 2; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW SEQUENCES; -~~~ - -~~~ - sequence_schema | sequence_name -------------------+---------------- - public | even_numbers -(1 row) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> DROP SEQUENCE even_numbers; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW SEQUENCES; -~~~ - -~~~ - sequence_schema | sequence_name -------------------+---------------- -(0 rows) -~~~ - - - -## See also -- [`CREATE SEQUENCE`](create-sequence.html) -- [`ALTER SEQUENCE`](alter-sequence.html) -- [`SHOW SEQUENCES`](show-sequences.html) -- [Functions and Operators](functions-and-operators.html) -- [SQL Statements](sql-statements.html) -- [Online Schema Changes](online-schema-changes.html) diff --git a/src/current/v22.1/drop-super-region.md b/src/current/v22.1/drop-super-region.md deleted file mode 100644 index e31c54e3d92..00000000000 --- a/src/current/v22.1/drop-super-region.md +++ /dev/null @@ -1,106 +0,0 @@ ---- -title: DROP SUPER REGION -summary: The DROP SUPER REGION statement drops a super region from a multi-region database. -toc: true -docs_area: reference.sql ---- - - The `ALTER DATABASE .. DROP SUPER REGION` [statement](sql-statements.html) drops a [super region](multiregion-overview.html#super-regions) from a [multi-region database](multiregion-overview.html). - -{% include enterprise-feature.md %} - -{{site.data.alerts.callout_info}} -`DROP SUPER REGION` is a subcommand of [`ALTER DATABASE`](alter-database.html). -{{site.data.alerts.end}} - -{{site.data.alerts.callout_info}} -{% include feature-phases/preview.md %} -{{site.data.alerts.end}} - -## Synopsis - -
    -{% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/alter_database_drop_super_region.html %} -
    - -## Parameters - -| Parameter | Description | -|-----------------+-----------------------------------------------------------------------------------------------------------| -| `database_name` | The database from which you are dropping a [super region](multiregion-overview.html#super-regions). | -| `name` | The name of the [super region](multiregion-overview.html#super-regions) being dropped from this database. | - -## Required privileges - -To drop a super region from a database, the user must have one of the following: - -- Membership to the [`admin`](security-reference/authorization.html#roles) role for the cluster. -- Either [ownership](security-reference/authorization.html#object-ownership) or the [`CREATE` privilege](security-reference/authorization.html#supported-privileges) for the database. - -## Considerations - -{% include {{page.version.version}}/sql/super-region-considerations.md %} - -## Examples - -The examples in this section use the following setup. - -{% include {{page.version.version}}/sql/multiregion-example-setup.md %} - -#### Set up movr database regions - -{% include {{page.version.version}}/sql/multiregion-movr-add-regions.md %} - -#### Set up movr global tables - -{% include {{page.version.version}}/sql/multiregion-movr-global.md %} - -#### Set up movr regional tables - -{% include {{page.version.version}}/sql/multiregion-movr-regional-by-row.md %} - -### Enable super regions - -{% include {{page.version.version}}/sql/enable-super-regions.md %} - -### Drop a super region from a database - -To drop a super region from a multi-region database, use a [`DROP SUPER REGION`](drop-super-region.html) statement: - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER DATABASE movr DROP SUPER REGION "usa"; -~~~ - -~~~ -ALTER DATABASE DROP SUPER REGION -~~~ - -Note that you cannot [drop a region](drop-region.html) that is part of a super region until you either [alter the super region](alter-super-region.html) to remove it, or [drop the super region](drop-super-region.html) altogether. - -For example, using the super region that was added in [`ADD SUPER REGION`](add-super-region.html#add-a-super-region-to-a-database): - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER DATABASE movr DROP REGION "us-west1"; -~~~ - -~~~ -ERROR: region us-west1 is part of super region usa -SQLSTATE: 2BP01 -HINT: you must first drop super region usa before you can drop the region us-west1 -~~~ - -## See also - -- [Multi-Region Capabilities Overview](multiregion-overview.html) -- [Super regions](multiregion-overview.html#super-regions) -- [`SET PRIMARY REGION`](set-primary-region.html) -- [`ADD SUPER REGION`](add-super-region.html) -- [`ALTER SUPER REGION`](alter-super-region.html) -- [`SHOW SUPER REGIONS`](show-super-regions.html) -- [`ADD REGION`](add-region.html) -- [`SHOW REGIONS`](show-regions.html) -- [`ALTER TABLE`](alter-table.html) -- [`ALTER DATABASE`](alter-database.html) -- [SQL Statements](sql-statements.html) diff --git a/src/current/v22.1/drop-table.md b/src/current/v22.1/drop-table.md deleted file mode 100644 index 4f9094f2ab7..00000000000 --- a/src/current/v22.1/drop-table.md +++ /dev/null @@ -1,207 +0,0 @@ ---- -title: DROP TABLE -summary: The DROP TABLE statement removes a table and all its indexes from a database. -toc: true -docs_area: reference.sql ---- - -The `DROP TABLE` [statement](sql-statements.html) removes a table and all its indexes from a database. 
- -{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %} - -{% include {{ page.version.version }}/misc/declarative-schema-changer-note.md %} - -## Required privileges - -The user must have the `DROP` [privilege](security-reference/authorization.html#managing-privileges) on the specified table(s). If `CASCADE` is used, the user must have the privileges required to drop each dependent object as well. - -## Synopsis - -
    {% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/drop_table.html %}
    - -## Parameters - -Parameter | Description -----------|------------ -`IF EXISTS` | Drop the table if it exists; if it does not exist, do not return an error. -`table_name` | A comma-separated list of table names. To find table names, use [`SHOW TABLES`](show-tables.html). -`CASCADE` | Drop all objects (such as [constraints](constraints.html) and [views](views.html)) that depend on the table.

    `CASCADE` does not list objects it drops, so should be used cautiously. -`RESTRICT` | _(Default)_ Do not drop the table if any objects (such as [constraints](constraints.html) and [views](views.html)) depend on it. - -## Viewing schema changes - -{% include {{ page.version.version }}/misc/schema-change-view-job.md %} - -## Examples - -{% include {{page.version.version}}/sql/movr-statements.md %} - -### Remove a table (no dependencies) - -In this example, other objects do not depend on the table being dropped. - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW TABLES FROM movr; -~~~ - -~~~ - schema_name | table_name | type | estimated_row_count ---------------+----------------------------+-------+---------------------- - public | promo_codes | table | 1000 - public | rides | table | 500 - public | user_promo_codes | table | 0 - public | users | table | 50 - public | vehicle_location_histories | table | 1000 - public | vehicles | table | 15 -(6 rows) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> DROP TABLE promo_codes; -~~~ - -~~~ -DROP TABLE -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW TABLES FROM movr; -~~~ - -~~~ - schema_name | table_name | type | estimated_row_count ---------------+----------------------------+-------+---------------------- - public | rides | table | 500 - public | user_promo_codes | table | 0 - public | users | table | 50 - public | vehicle_location_histories | table | 1000 - public | vehicles | table | 15 -(5 rows) -~~~ - -### Remove a table and dependent objects with `CASCADE` - -In this example, a [foreign key](foreign-key.html) from a different table references the table being dropped. Therefore, it's only possible to drop the table while simultaneously dropping the dependent foreign key constraint using `CASCADE`. - -{{site.data.alerts.callout_danger}}CASCADE drops all dependent objects without listing them, which can lead to inadvertent and difficult-to-recover losses. To avoid potential harm, we recommend dropping objects individually in most cases.{{site.data.alerts.end}} - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW TABLES FROM movr; -~~~ - -~~~ - schema_name | table_name | type | estimated_row_count ---------------+----------------------------+-------+---------------------- - public | rides | table | 500 - public | user_promo_codes | table | 0 - public | users | table | 50 - public | vehicle_location_histories | table | 1000 - public | vehicles | table | 15 -(5 rows) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> DROP TABLE users; -~~~ - -~~~ -pq: "users" is referenced by foreign key from table "vehicles" -~~~ - -To see how `users` is referenced from `vehicles`, you can use the [`SHOW CREATE`](show-create.html) statement. `SHOW CREATE` shows how the columns in a table are created, including data types, default values, indexes, and constraints. 
- -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW CREATE TABLE vehicles; -~~~ - -~~~ - table_name | create_statement --------------+--------------------------------------------------------------------------------------------------- - vehicles | CREATE TABLE public.vehicles ( - | id UUID NOT NULL, - | city VARCHAR NOT NULL, - | type VARCHAR NULL, - | owner_id UUID NULL, - | creation_time TIMESTAMP NULL, - | status VARCHAR NULL, - | current_location VARCHAR NULL, - | ext JSONB NULL, - | CONSTRAINT vehicles_pkey PRIMARY KEY (city ASC, id ASC), - | CONSTRAINT fk_city_ref_users FOREIGN KEY (city, owner_id) REFERENCES public.users(city, id), - | INDEX vehicles_auto_index_fk_city_ref_users (city ASC, owner_id ASC), - | FAMILY "primary" (id, city, type, owner_id, creation_time, status, current_location, ext) - | ) -(1 row) -~~~ - - -{% include_cached copy-clipboard.html %} -~~~sql -> DROP TABLE users CASCADE; -~~~ - -~~~ -DROP TABLE -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW TABLES FROM movr; -~~~ - -~~~ - schema_name | table_name | type | estimated_row_count ---------------+----------------------------+-------+---------------------- - public | rides | table | 500 - public | user_promo_codes | table | 0 - public | vehicle_location_histories | table | 1000 - public | vehicles | table | 15 -(4 rows) -~~~ - -Use a `SHOW CREATE TABLE` statement to verify that the foreign key constraint has been removed from `vehicles`. - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW CREATE TABLE vehicles; -~~~ - -~~~ - table_name | create_statement --------------+------------------------------------------------------------------------------------------------ - vehicles | CREATE TABLE public.vehicles ( - | id UUID NOT NULL, - | city VARCHAR NOT NULL, - | type VARCHAR NULL, - | owner_id UUID NULL, - | creation_time TIMESTAMP NULL, - | status VARCHAR NULL, - | current_location VARCHAR NULL, - | ext JSONB NULL, - | CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC), - | INDEX vehicles_auto_index_fk_city_ref_users (city ASC, owner_id ASC), - | FAMILY "primary" (id, city, type, owner_id, creation_time, status, current_location, ext) - | ) -(1 row) -~~~ - -## See also - -- [`ALTER TABLE`](alter-table.html) -- [`CREATE TABLE`](create-table.html) -- [`INSERT`](insert.html) -- [`RENAME TABLE`](rename-table.html) -- [`SHOW COLUMNS`](show-columns.html) -- [`SHOW TABLES`](show-tables.html) -- [`UPDATE`](update.html) -- [`DELETE`](delete.html) -- [`DROP INDEX`](drop-index.html) -- [`DROP VIEW`](drop-view.html) -- [`SHOW JOBS`](show-jobs.html) -- [Online Schema Changes](online-schema-changes.html) diff --git a/src/current/v22.1/drop-type.md b/src/current/v22.1/drop-type.md deleted file mode 100644 index 867f134afc9..00000000000 --- a/src/current/v22.1/drop-type.md +++ /dev/null @@ -1,145 +0,0 @@ ---- -title: DROP TYPE -summary: The DROP TYPE statement drops an enumerated data type from the database. -toc: true -docs_area: reference.sql ---- - -The `DROP TYPE` [statement](sql-statements.html) drops a specified [enumerated data type](enum.html) from the current database. - -{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %} - -{% include {{ page.version.version }}/misc/declarative-schema-changer-note.md %} - -## Synopsis - -
    -{% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/drop_type.html %} -
    - -## Parameters - -Parameter | Description -----------|------------ -`IF EXISTS` | Drop the type if it exists. If it does not exist, do not return an error. -`type_name_list` | A type name or a comma-separated list of type names to drop. - -## Required privileges - -The user must be the owner of the type. - -## Details - -- You cannot drop a type or view that is in use by a table. -- You can only drop a user-defined type from the database that contains the type. - -## Example - -### Drop a single type - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TYPE status AS ENUM ('open', 'closed', 'inactive'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW ENUMS; -~~~ - -~~~ - schema | name | value ----------+--------+----------------------- - public | status | open|closed|inactive -(1 row) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE accounts ( - id UUID PRIMARY KEY DEFAULT gen_random_uuid(), - balance DECIMAL, - status status -); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> DROP TYPE status; -~~~ - -~~~ -ERROR: cannot drop type "status" because other objects ([bank.public.accounts]) still depend on it -SQLSTATE: 2BP01 -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> DROP TABLE accounts; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> DROP TYPE status; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW ENUMS; -~~~ - -~~~ - schema | name | value ----------+------+-------- -(0 rows) -~~~ - -### Drop multiple types - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TYPE weekday AS ENUM ('monday', 'tuesday', 'wednesday', 'thursday', 'friday'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TYPE weekend AS ENUM ('sunday', 'saturday'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW ENUMS; -~~~ - -~~~ - schema | name | value ----------+---------+------------------------------------------- - public | weekday | monday|tuesday|wednesday|thursday|friday - public | weekend | sunday|saturday -(2 rows) -~~~ - - -{% include_cached copy-clipboard.html %} -~~~ sql -> DROP TYPE weekday, weekend; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW ENUMS; -~~~ - -~~~ - schema | name | value ----------+------+-------- -(0 rows) -~~~ - -## See also - -- [`ENUM`](enum.html) -- [Data types](data-types.html) -- [`CREATE TYPE`](create-type.html) -- [`ALTER TYPE`](alter-type.html) -- [`SHOW ENUMS`](show-enums.html) diff --git a/src/current/v22.1/drop-user.md b/src/current/v22.1/drop-user.md deleted file mode 100644 index 0b464d1f6ca..00000000000 --- a/src/current/v22.1/drop-user.md +++ /dev/null @@ -1,99 +0,0 @@ ---- -title: DROP USER -summary: The DROP USER statement removes one or more SQL users. -toc: true -docs_area: reference.sql ---- - -The `DROP USER` [statement](sql-statements.html) removes one or more SQL users. - -{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %} - -{{site.data.alerts.callout_info}} - Since the keywords `ROLE` and `USER` can now be used interchangeably in SQL statements for enhanced PostgreSQL compatibility, `DROP USER` is now an alias for [`DROP ROLE`](drop-role.html). -{{site.data.alerts.end}} - -## Consideration - -Users that [own objects](security-reference/authorization.html#object-ownership) (such as databases, tables, schemas, and types) cannot be dropped until the [ownership is transferred to another user](owner-to.html#change-a-databases-owner). 
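For example, a minimal sketch of that workflow, assuming a database named `test` that is owned by the user `mroach` you intend to drop (both names are placeholders for illustration):

{% include_cached copy-clipboard.html %}
~~~ sql
> ALTER DATABASE test OWNER TO root;
~~~

{% include_cached copy-clipboard.html %}
~~~ sql
> DROP USER mroach;
~~~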
- -## Required privileges - -Non-admin users cannot drop admin users. To drop non-admin users, the user must be a member of the `admin` role or have the [`CREATEROLE`](create-user.html#create-a-user-that-can-create-other-users-and-manage-authentication-methods-for-the-new-users) parameter set. - -## Synopsis - -
    {% include {{ page.version.version }}/sql/generated/diagrams/drop_user.html %}
- -## Parameters - - Parameter | Description ------------|------------- -`user_name` | The username of the user to remove. To remove multiple users, use a comma-separated list of usernames.

    You can use [`SHOW USERS`](show-users.html) to find usernames. - -## Example - -### Remove privileges - -All of a user's privileges must be revoked before the user can be dropped. - -In this example, first check a user's privileges. Then, revoke the user's privileges before removing the user. - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW GRANTS ON test.customers FOR mroach; -~~~ - -~~~ -+-----------+--------+------------+ -| Table | User | Privileges | -+-----------+--------+------------+ -| customers | mroach | CREATE | -| customers | mroach | INSERT | -| customers | mroach | UPDATE | -+-----------+--------+------------+ -(3 rows) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> REVOKE CREATE,INSERT,UPDATE ON test.customers FROM mroach; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> DROP USER mroach; -~~~ - -### Remove default privileges - -In addition to removing a user's privileges, a user's [default privileges](security-reference/authorization.html#default-privileges) must be removed prior to dropping the user. If you attempt to drop a user with modified default privileges, you will encounter an error like the following: - -~~~ -ERROR: role mroach cannot be dropped because some objects depend on it -privileges for default privileges on new relations belonging to role demo in database movr -SQLSTATE: 2BP01 -HINT: USE test; ALTER DEFAULT PRIVILEGES REVOKE ALL ON TABLES FROM mroach; -~~~ - -Run the `HINT` SQL prior to dropping the user. - -{% include_cached copy-clipboard.html %} -~~~ sql -USE test; ALTER DEFAULT PRIVILEGES REVOKE ALL ON TABLES FROM mroach; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> DROP USER mroach; -~~~ - -## See also - -- [`CREATE USER`](create-user.html) -- [`ALTER USER`](alter-user.html) -- [`SHOW USERS`](show-users.html) -- [`GRANT`](grant.html) -- [`SHOW GRANTS`](show-grants.html) -- [Create Security Certificates](cockroach-cert.html) -- [SQL Statements](sql-statements.html) diff --git a/src/current/v22.1/drop-view.md b/src/current/v22.1/drop-view.md deleted file mode 100644 index d302808f0ee..00000000000 --- a/src/current/v22.1/drop-view.md +++ /dev/null @@ -1,133 +0,0 @@ ---- -title: DROP VIEW -summary: The DROP VIEW statement removes a view from a database. -toc: true -docs_area: reference.sql ---- - -The `DROP VIEW` [statement](sql-statements.html) removes a [view](views.html) from a database. - -{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %} - -## Required privileges - -The user must have the `DROP` [privilege](security-reference/authorization.html#managing-privileges) on the specified view(s). If `CASCADE` is used to drop dependent views, the user must have the `DROP` privilege on each dependent view as well. - -## Synopsis - -
    {% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/drop_view.html %}
    - -## Parameters - - Parameter | Description -----------|------------- -`MATERIALIZED` | Drop a [materialized view](views.html#materialized-views). - `IF EXISTS` | Drop the view if it exists; if it does not exist, do not return an error. - `table_name` | A comma-separated list of view names. To find view names, use:

    `SELECT * FROM information_schema.tables WHERE table_type = 'VIEW';` - `CASCADE` | Drop other views that depend on the view being dropped.

    `CASCADE` does not list views it drops, so should be used cautiously. - `RESTRICT` | _(Default)_ Do not drop the view if other views depend on it. - -## Examples - -### Remove a view (no dependencies) - -In this example, other views do not depend on the view being dropped. - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM information_schema.tables WHERE table_type = 'VIEW'; -~~~ - -~~~ -+---------------+-------------------+--------------------+------------+---------+ -| TABLE_CATALOG | TABLE_SCHEMA | TABLE_NAME | TABLE_TYPE | VERSION | -+---------------+-------------------+--------------------+------------+---------+ -| def | bank | user_accounts | VIEW | 1 | -| def | bank | user_emails | VIEW | 1 | -+---------------+-------------------+--------------------+------------+---------+ -(2 rows) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> DROP VIEW bank.user_emails; -~~~ - -~~~ -DROP VIEW -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM information_schema.tables WHERE table_type = 'VIEW'; -~~~ - -~~~ -+---------------+-------------------+--------------------+------------+---------+ -| TABLE_CATALOG | TABLE_SCHEMA | TABLE_NAME | TABLE_TYPE | VERSION | -+---------------+-------------------+--------------------+------------+---------+ -| def | bank | user_accounts | VIEW | 1 | -+---------------+-------------------+--------------------+------------+---------+ -(1 row) -~~~ - -### Remove a view (with dependencies) - -In this example, another view depends on the view being dropped. Therefore, it's only possible to drop the view while simultaneously dropping the dependent view using `CASCADE`. - -{{site.data.alerts.callout_danger}}CASCADE drops all dependent views without listing them, which can lead to inadvertent and difficult-to-recover losses. 
To avoid potential harm, we recommend dropping objects individually in most cases.{{site.data.alerts.end}} - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM information_schema.tables WHERE table_type = 'VIEW'; -~~~ - -~~~ -+---------------+-------------------+--------------------+------------+---------+ -| TABLE_CATALOG | TABLE_SCHEMA | TABLE_NAME | TABLE_TYPE | VERSION | -+---------------+-------------------+--------------------+------------+---------+ -| def | bank | user_accounts | VIEW | 1 | -| def | bank | user_emails | VIEW | 1 | -+---------------+-------------------+--------------------+------------+---------+ -(2 rows) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> DROP VIEW bank.user_accounts; -~~~ - -~~~ -pq: cannot drop view "user_accounts" because view "user_emails" depends on it -~~~ - -{% include_cached copy-clipboard.html %} -~~~sql -> DROP VIEW bank.user_accounts CASCADE; -~~~ - -~~~ -DROP VIEW -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM information_schema.tables WHERE table_type = 'VIEW'; -~~~ - -~~~ -+---------------+-------------------+--------------------+------------+---------+ -| TABLE_CATALOG | TABLE_SCHEMA | TABLE_NAME | TABLE_TYPE | VERSION | -+---------------+-------------------+--------------------+------------+---------+ -| def | bank | create_test | VIEW | 1 | -+---------------+-------------------+--------------------+------------+---------+ -(1 row) -~~~ - -## See also - -- [Views](views.html) -- [`CREATE VIEW`](create-view.html) -- [`SHOW CREATE`](show-create.html) -- [`ALTER VIEW`](alter-view.html) -- [Online Schema Changes](online-schema-changes.html) diff --git a/src/current/v22.1/enable-node-map.md b/src/current/v22.1/enable-node-map.md deleted file mode 100644 index a7cc0bb7736..00000000000 --- a/src/current/v22.1/enable-node-map.md +++ /dev/null @@ -1,203 +0,0 @@ ---- -title: Enable the Node Map -summary: Learn how to enable the node map in the DB Console. -toc: true -docs_area: manage ---- - -{% include {{ page.version.version }}/ui/admin-access.md %} - -The **Node Map** is useful for: - -- Visualizing the geographic configuration of a multi-region cluster on a world map. -- Viewing real-time cluster metrics. -- Drilling down to individual nodes for monitoring health and performance. - -This page guides you through the process of setting up and enabling the Node Map. - -{% include enterprise-feature.md %} - -DB Console - -## Set up and enable the Node Map - -To enable the Node Map, you need to start the cluster with the correct [`--locality`](cockroach-start.html#locality) flags and assign the latitude and longitude for each locality. - -{{site.data.alerts.callout_info}} -The Node Map will not be displayed until *all* nodes are started with the correct `--locality` flags and all localities are assigned the corresponding latitude and longitude. -{{site.data.alerts.end}} - -Consider a four-node geo-distributed cluster with the following configuration: - -| Node | Region | Datacenter | -| ------ | ------ | ------ | -| Node1 | us-east-1 | us-east-1a | -| Node2 | us-east-1 | us-east-1b | -| Node3 | us-west-1 | us-west-1a | -| Node4 | eu-west-1 | eu-west-1a | - -### Step 1. Start the nodes with the correct `--locality` flags - -To start a new cluster with the correct `--locality` flags: - -1. 
Start Node 1: - - {% include_cached copy-clipboard.html %} - ~~~ - $ cockroach start \ - --insecure \ - --locality=region=us-east-1,datacenter=us-east-1a \ - --advertise-addr= \ - --cache=.25 \ - --max-sql-memory=.25 \ - --join=,,, - ~~~ - -1. Start Node 2: - - {% include_cached copy-clipboard.html %} - ~~~ - $ cockroach start \ - --insecure \ - --locality=region=us-east-1,datacenter=us-east-1b \ - --advertise-addr= \ - --cache=.25 \ - --max-sql-memory=.25 \ - --join=,,, - ~~~ - -1. Start Node 3: - - {% include_cached copy-clipboard.html %} - ~~~ - $ cockroach start \ - --insecure \ - --locality=region=us-west-1,datacenter=us-west-1a \ - --advertise-addr= \ - --cache=.25 \ - --max-sql-memory=.25 \ - --join=,,, - ~~~ - -1. Start Node 4: - - {% include_cached copy-clipboard.html %} - ~~~ - $ cockroach start \ - --insecure \ - --locality=region=eu-west-1,datacenter=eu-west-1a \ - --advertise-addr= \ - --cache=.25 \ - --max-sql-memory=.25 \ - --join=,,, - ~~~ - -1. Use the [`cockroach init`](cockroach-init.html) command to perform a one-time initialization of the cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach init --insecure --host=
    - ~~~ - -1. [Access the DB Console](ui-overview.html#db-console-access). - -1. If the node list displays, click the selector Node map selector and select **Node Map**. - -The following page is displayed: - -DB Console - -### Step 2. Set the Enterprise license and refresh the DB Console - -After [setting the Enterprise license](enterprise-licensing.html), the Node Map should now be displaying the highest-level localities you defined: - -DB Console - -{{site.data.alerts.callout_info}} -To be displayed on the world map, localities must be assigned a corresponding latitude and longitude. -{{site.data.alerts.end}} - -### Step 3. Set the latitudes and longitudes for the localities - -1. Launch the built-in SQL client: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --insecure --host=
    - ~~~ - -1. Insert the approximate latitude and longitude of each region into the `system.locations` table: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > INSERT INTO system.locations VALUES - ('region', 'us-east-1', 37.478397, -76.453077), - ('region', 'us-west-1', 38.837522, -120.895824), - ('region', 'eu-west-1', 53.142367, -7.692054); - ~~~ - -For the latitudes and longitudes of AWS, Azure, and Google Cloud regions, see [Location Coordinates for Reference](#location-coordinates). - -### Step 4. Refresh the Node Map - -Refresh the DB Console to see the updated Node Map: - -DB Console - -### Step 5. Navigate the Node Map - -To navigate to Node 2, which is in datacenter `us-east-1a` in the `us-east-1` region: - -1. Click the map component marked as **region=us-east-1** on the Node Map. The [locality component](ui-cluster-overview-page.html#locality-component) for the datacenter is displayed. - - DB Console - -1. Click the datacenter component marked as **datacenter=us-east-1a**. The individual [node components](ui-cluster-overview-page.html#node-component) are displayed. - - DB Console - -1. To navigate back to the cluster view, either click **Cluster** in the breadcrumb trail at the top of the Node Map, or click **Up to REGION=US-EAST-1** and then click **Up to CLUSTER** in the lower left-hand side of the Node Map. - -## Troubleshoot the Node Map - -### Node Map not displayed - -- The Node Map requires an [Enterprise license](enterprise-licensing.html). -- All nodes in the cluster must be assigned [localities](cockroach-start.html#locality). -- Localities must be [assigned a corresponding latitude and longitude](#step-3-set-the-latitudes-and-longitudes-for-the-localities). - -To verify all requirements, navigate to the [**Localities**](ui-debug-pages.html#configuration) debug page in the DB Console. - -DB Console - -The **Localities** debug page displays the following: - -- Localities configuration that you set up while starting the nodes with the `--locality` flags. -- Nodes corresponding to each locality. -- Latitude and longitude coordinates for each locality. - -### World Map not displayed for all locality levels - -The world map is displayed only when [localities are assigned latitude and longitude coordinates](#step-3-set-the-latitudes-and-longitudes-for-the-localities). - -If a locality (e.g., region) is not assigned latitude and longitude coordinates, it is displayed using the latitude and longitude of any lower-level localities it contains (e.g., datacenter). If no coordinates are available, localities are plotted in a circular layout. - -### Displayed Used Capacity value is more than configured Capacity - -{% include {{ page.version.version }}/misc/available-capacity-metric.md %} - -## Location coordinates - -This section lists the latitudes and longitudes of AWS, Azure, and Google Cloud regions. - -### AWS locations - -{% include {{ page.version.version }}/misc/aws-locations.md %} - -### Azure locations - -{% include {{ page.version.version }}/misc/azure-locations.md %} - -### Google Cloud locations - -{% include {{ page.version.version }}/misc/gce-locations.md %} diff --git a/src/current/v22.1/encryption.md b/src/current/v22.1/encryption.md deleted file mode 100644 index fcb10a08423..00000000000 --- a/src/current/v22.1/encryption.md +++ /dev/null @@ -1,104 +0,0 @@ ---- -title: Managing Encryption for CockroachDB Self-Hosted -summary: Learn about the encryption features for secure CockroachDB clusters. 
-toc: true -docs_area: manage ---- - -This page outlines several procedures necessary for managing encryption in CockroachDB {{ site.data.products.core }} clusters. - -## Generating store key files - -Cockroach determines which encryption algorithm to use based on the size of the key file. The key file must contain random data making up the key ID (32 bytes) and the actual key (16, 24, or 32 bytes depending on the encryption algorithm). - -| Algorithm | Key size | Key file size | -|-|-|-| -| AES-128 | 128 bits (16 bytes) | 48 bytes | -| AES-192 | 192 bits (24 bytes) | 56 bytes | -| AES-256 | 256 bits (32 bytes) | 64 bytes | - -Generating a key file can be done using the `cockroach` CLI: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach gen encryption-key -s 128 /path/to/my/aes-128.key -~~~ - -Or the equivalent [openssl](https://www.openssl.org/docs/man1.1.1/man1/openssl.html) CLI command: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ openssl rand -out /path/to/my/aes-128.key 48 -~~~ - -## Starting a node with encryption - -Encryption at Rest is configured at node start time using the `--enterprise-encryption` command line flag. The flag specifies the encryption options for one of the stores on the node. If multiple stores exist, the flag must be specified for each store. - -The flag takes the form: `--enterprise-encryption=path=,key=,old-key=,rotation-period=`. - -The allowed components in the flag are: - -| Component | Requirement | Description | -|-|-|-| -| `path` | Required | Path of the store to apply encryption to. | -| `key` | Required | Path to the key file to encrypt data with, or `plain` for plaintext. | -| `old-key` | Required | Path to the key file the data is encrypted with, or `plain` for plaintext. | -| `rotation-period` | Optional | How often data keys should be automatically rotated. Default: one week. | - -The `key` and `old-key` components must **always** be specified. They allow for transitions between encryption algorithms, and between plaintext and encrypted. - -Starting a node for the first time using AES-128 encryption can be done using: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach start --store=cockroach-data --enterprise-encryption=path=cockroach-data,key=/path/to/my/aes-128.key,old-key=plain -~~~ - -{{site.data.alerts.callout_danger}} -Once specified for a given store, the `--enterprise-encryption` flag must always be present. -{{site.data.alerts.end}} - -## Checking encryption status - -Encryption status can be seen on the node's stores report, reachable through: `http(s)://nodeaddress:8080/#/reports/stores/local` (or replace `local` with the node ID). For example, if you are running a [local cluster](secure-a-cluster.html), you can see the node's stores report at `https://localhost:8080/#/reports/stores/local`. - -The report shows encryption status for all stores on the selected node, including: - -- Encryption algorithm. -- Active store key information. -- Active data key information. -- The fraction of files/bytes encrypted using the active data key. - -CockroachDB relies on [storage layer compactions]({% link {{ page.version.version }}/architecture/storage-layer.md %}#compaction) to write new files using the latest encryption key. It may take several days for all files to be replaced. Some files are only rewritten at startup, and some keep older copies around, requiring multiple restarts. 
You can force storage compaction with the `cockroach debug compact` command (the node must first be [stopped]({% link {{ page.version.version }}/node-shutdown.md %}#perform-node-shutdown)). - -The fraction of files/bytes encrypted on the store may be less than 100% for the following reasons: - -- The percentage shown is the percentage encrypted with the **current** data key, which rotates at the configured [`rotation-period`](#starting-a-node-with-encryption). When a data key rotates, the percentage will drop down to zero and slowly climb up as data is compacted. -- In some cases, it may never reach 100%. This can happen because from the point in time at which encryption is enabled, CockroachDB only encrypts **new** data written to the filesystem. Because it relies entirely on storage layer compactions, there's no mechanism by which dormant on-disk data is encrypted. - -Information about keys is written to [the logs](logging-overview.html), including: - -- Active/old key information at startup. -- New key information after data key rotation. - -Alternatively, you can use the [`cockroach debug encryption-active-key`](cockroach-debug-encryption-active-key.html) command to view information about a store's encryption algorithm and store key. - -## Changing encryption algorithm or keys - -Encryption type and keys can be changed at any time by restarting the node. To change keys or encryption type, the `key` component of the `--enterprise-encryption` flag is set to the new key, while the key previously used must be specified in the `old-key` component. - -For example, we can switch from AES-128 to AES-256 using: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach start --store=cockroach-data --enterprise-encryption=path=cockroach-data,key=/path/to/my/aes-256.key,old-key=/path/to/my/aes-128.key -~~~ - -Upon starting, the node will read the existing data keys using the old encryption key (`aes-128.key`), then rewrite the data keys using the new key (`aes-256.key`). A new data key will be generated to match the desired AES-256 algorithm. - -To check that the new key is active, use the stores report page in the DB Console to [check the encryption status](#checking-encryption-status). - -To disable encryption, specify `key=plain`. The data keys will be stored in plaintext and new data will not be encrypted. - -To rotate keys, specify `key=/path/to/my/new-aes-128.key` and `old-key=/path/to/my/old-aes-128.key`. The data keys will be decrypted using the old key and then encrypted using the new key. A new data key will also be generated. diff --git a/src/current/v22.1/enterprise-licensing.md b/src/current/v22.1/enterprise-licensing.md deleted file mode 100644 index ebeb1bc52e1..00000000000 --- a/src/current/v22.1/enterprise-licensing.md +++ /dev/null @@ -1,18 +0,0 @@ ---- -title: Enterprise Features -summary: Learn about CockroachDB features that require an Enterprise license key. -toc: true -docs_area: ---- - -CockroachDB distributes a single binary that contains both core and Enterprise features. You can use core features without any license key. However, to use the Enterprise features, you need either a trial or an Enterprise license key. - -This page lists Enterprise features. For information on how to obtain and set trial and Enterprise license keys for CockroachDB, see the [Licensing FAQs](licensing-faqs.html#obtain-a-license). 
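As a rough sketch, applying a license key involves two cluster settings; the organization name and license string below are placeholders, and the exact procedure is covered in the Licensing FAQs:

{% include_cached copy-clipboard.html %}
~~~ sql
> SET CLUSTER SETTING cluster.organization = 'Acme Company';
> SET CLUSTER SETTING enterprise.license = 'xxxxx-xxxxx-xxxxx-xxxxx';
~~~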
- -{% include {{ page.version.version }}/misc/enterprise-features.md %} - -## See also - -- [`SET CLUSTER SETTING`](set-cluster-setting.html) -- [`SHOW CLUSTER SETTING`](show-cluster-setting.html) -- [Enterprise Trial –– Get Started](get-started-with-enterprise-trial.html) diff --git a/src/current/v22.1/enum.md b/src/current/v22.1/enum.md deleted file mode 100644 index de5632e1d2e..00000000000 --- a/src/current/v22.1/enum.md +++ /dev/null @@ -1,218 +0,0 @@ ---- -title: ENUM -summary: CockroachDB's ENUM data types comprise a set of values. -toc: true -docs_area: reference.sql ---- - -A user-defined `ENUM` [data type](data-types.html) consists of a set of enumerated, static values. - -## Syntax - -To declare a new `ENUM` data type, use [`CREATE TYPE`](create-type.html): - -~~~ sql -> CREATE TYPE AS ENUM ('', '', ...); -~~~ - -where `` is the name of the new type, and `, , ...` are string literals that make up the type's set of static values. - -{{site.data.alerts.callout_info}} -You can qualify the `` of an enumerated type with a [database and schema name](sql-name-resolution.html) (e.g., `db.typename`). After the type is created, it can only be referenced from the database that contains the type. -{{site.data.alerts.end}} - -To show all `ENUM` types in the database, including all `ENUMS` created implicitly for [multi-region databases](movr-flask-overview.html), use [`SHOW ENUMS`](show-enums.html): - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW ENUMS; -~~~ - -To modify an `ENUM` type, use [`ALTER TYPE`](alter-type.html): - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TYPE ADD VALUE ''; -~~~ - -where `` is a string literal to add to the existing list of type values. You can also use `ALTER TYPE` to rename types, rename type values, set a type's schema, or change the type owner's [role specification](grant.html). - -To drop the type, use [`DROP TYPE`](drop-type.html): - -{% include_cached copy-clipboard.html %} -~~~ sql -> DROP TYPE ; -~~~ - -## Required privileges - -- To [create a type](create-type.html) in a database, a user must have the `CREATE` [privilege](security-reference/authorization.html#managing-privileges) on the database. -- To [drop a type](drop-type.html), a user must be the owner of the type. -- To [alter a type](alter-type.html), a user must be the owner of the type. -- To [grant privileges](grant.html) on a type, a user must have the `GRANT` privilege and the privilege that they want to grant. -- To create an object that depends on a type, a user must have the `USAGE` privilege on the type. 
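For instance, a minimal sketch of granting type usage (the type `status` and the user `max` are placeholders for illustration):

{% include_cached copy-clipboard.html %}
~~~ sql
> GRANT USAGE ON TYPE status TO max;
~~~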
- -## Example - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TYPE status AS ENUM ('open', 'closed', 'inactive'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW ENUMS; -~~~ - -~~~ - schema | name | value ----------+--------+----------------------- - public | status | open|closed|inactive -(1 row) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE accounts ( - id UUID PRIMARY KEY DEFAULT gen_random_uuid(), - balance DECIMAL, - status status -); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO accounts(balance,status) VALUES (500.50,'open'), (0.00,'closed'), (1.25,'inactive'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM accounts; -~~~ - -~~~ - id | balance | status ----------------------------------------+---------+----------- - 3848e36d-ebd4-44c6-8925-8bf24bba957e | 500.50 | open - 60928059-ef75-47b1-81e3-25ec1fb6ff10 | 0.00 | closed - 71ae151d-99c3-4505-8e33-9cda15fce302 | 1.25 | inactive -(3 rows) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW CREATE TABLE accounts; -~~~ - -~~~ - table_name | create_statement --------------+-------------------------------------------------- - accounts | CREATE TABLE public.accounts ( - | id UUID NOT NULL DEFAULT gen_random_uuid(), - | balance DECIMAL NULL, - | status public.status NULL, - | CONSTRAINT accounts_pkey PRIMARY KEY (id ASC) - | ) -(1 row) -~~~ - - -## Supported casting and conversion - -`ENUM` data type values can be [cast](data-types.html#data-type-conversions-and-casts) to [`STRING`s](string.html). - -Values can be cast explicitly or implicitly. For example, the following [`SELECT`](select-clause.html) statements are equivalent: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM accounts WHERE status::STRING='open'; -~~~ - -~~~ - id | balance | status ----------------------------------------+---------+--------- - 3848e36d-ebd4-44c6-8925-8bf24bba957e | 500.50 | open -(1 row) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM accounts WHERE status='open'; -~~~ - -~~~ - id | balance | status ----------------------------------------+---------+--------- - 3848e36d-ebd4-44c6-8925-8bf24bba957e | 500.50 | open -(1 row) -~~~ - -### Comparing enumerated types - -To compare two enumerated types, you must explicitly cast both types as `STRING`s. 
For example: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TYPE inaccessible AS ENUM ('closed', 'inactive'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE notifications ( - id UUID PRIMARY KEY DEFAULT gen_random_uuid(), - status inaccessible, - message STRING -); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO notifications(status, message) VALUES ('closed', 'This account has been closed.'),('inactive', 'This account is on hold.'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT - accounts.id, notifications.message - FROM accounts JOIN notifications ON accounts.status = notifications.status; -~~~ - -~~~ -ERROR: unsupported comparison operator: = -SQLSTATE: 22023 -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT - accounts.id, notifications.message - FROM accounts JOIN notifications ON accounts.status::STRING = notifications.status; -~~~ - -~~~ -ERROR: unsupported comparison operator: = -SQLSTATE: 22023 -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT - accounts.id, notifications.message - FROM accounts JOIN notifications ON accounts.status::STRING = notifications.status::STRING; -~~~ - -~~~ - id | message ----------------------------------------+-------------------------------- - 285336c4-ca1f-490d-b0df-146aae94f5aa | This account is on hold. - 583157d5-4f34-43e5-a4d4-51db77feb391 | This account has been closed. -(2 rows) -~~~ - -## See also - -- [Data Types](data-types.html) -- [`CREATE TYPE`](create-type.html) -- [`ALTER TYPE`](alter-type.html) -- [`SHOW ENUMS`](show-enums.html) -- [`DROP TYPE`](drop-type.html) diff --git a/src/current/v22.1/error-handling-and-troubleshooting.md b/src/current/v22.1/error-handling-and-troubleshooting.md deleted file mode 100644 index d6bdae56d5e..00000000000 --- a/src/current/v22.1/error-handling-and-troubleshooting.md +++ /dev/null @@ -1,105 +0,0 @@ ---- -title: Troubleshoot Common Problems -summary: How to troubleshoot problems and handle transaction retry errors during application development -toc: true -docs_area: develop ---- - -This page has instructions for handling errors and troubleshooting problems that may arise during application development. - -## Troubleshoot query problems - -If you are not satisfied with your SQL query performance, follow the instructions in [Optimize Statement Performance Overview][fast] to be sure you are avoiding common performance problems like full table scans, missing indexes, etc. - -If you have already optimized your SQL queries as described in [Optimize Statement Performance Overview][fast] and are still having issues such as: - -- [Hanging or "stuck" queries](query-behavior-troubleshooting.html#hanging-or-stuck-queries), usually due to [contention](performance-best-practices-overview.html#transaction-contention) with long-running transactions -- Queries that are slow some of the time (but not always) -- Low throughput of queries - -Take a look at [Troubleshoot SQL Behavior](query-behavior-troubleshooting.html). - -{% include {{ page.version.version }}/prod-deployment/check-sql-query-performance.md %} - -## Transaction retry errors - -Messages with [the PostgreSQL error code `40001` and the string `restart transaction`](common-errors.html#restart-transaction) indicate that a transaction failed because it [conflicted with another concurrent or recent transaction accessing the same data](performance-best-practices-overview.html#transaction-contention). 
The transaction needs to be retried by the client. - -If your language's client driver or ORM implements transaction retry logic internally (e.g., if you are using Python and [SQLAlchemy with the CockroachDB dialect](build-a-python-app-with-cockroachdb-sqlalchemy.html)), then you do not need to handle this logic from your application. - -If your driver or ORM does not implement this logic, then you will need to implement a retry loop in your application. - -{% include {{page.version.version}}/misc/client-side-intervention-example.md %} - -{{site.data.alerts.callout_info}} -If a consistently high percentage of your transactions are resulting in [transaction retry errors with the error code `40001` and the string `restart transaction`](common-errors.html#restart-transaction), then you may need to evaluate your [schema design](schema-design-overview.html) and data access patterns to find and remove sources of contention. For more information about contention, see [Transaction Contention](performance-best-practices-overview.html#transaction-contention). - -For more information about what is causing a specific transaction retry error code, see the [Transaction Retry Error Reference](transaction-retry-error-reference.html). -{{site.data.alerts.end}} - -For more information about transaction retry errors, see [Transaction retries](transactions.html#client-side-intervention). - -## Unsupported SQL features - -CockroachDB has support for [most SQL features](sql-feature-support.html). - -Additionally, CockroachDB supports [the PostgreSQL wire protocol and the majority of its syntax](postgresql-compatibility.html). This means that existing applications can often be migrated to CockroachDB without changing application code. - -However, you may encounter features of SQL or the PostgreSQL dialect that are not supported by CockroachDB. For example, the following PostgreSQL features are not supported: - -{% include {{page.version.version}}/sql/unsupported-postgres-features.md %} - -For more information about the differences between CockroachDB and PostgreSQL feature support, see [PostgreSQL Compatibility](postgresql-compatibility.html). - -For more information about the SQL standard features supported by CockroachDB, see [SQL Feature Support](sql-feature-support.html). - -## Troubleshoot cluster problems - -As a developer, you will mostly be working with the CockroachDB [SQL API](sql-statements.html). - -However, you may need to access the underlying cluster to troubleshoot issues where the root cause is not your SQL, but something happening at the cluster level. Symptoms of cluster-level issues can include: - -- Cannot join a node to an existing cluster -- Networking, client connection, or authentication issues -- Clock sync, replication, or node liveness issues -- Capacity planning, storage, or memory issues -- Node decommissioning failures - -For more information about how to troubleshoot cluster-level issues, see [Troubleshoot Cluster Setup](cluster-setup-troubleshooting.html). - -## Troubleshoot SQL client application problems - -### High client CPU load or connection pool exhaustion when SCRAM Password-based Authentication is enabled - -When [SASL/SCRAM-SHA-256 Secure Password-based Authentication](security-reference/scram-authentication.html) (SCRAM Authentication) is enabled on a cluster, some additional CPU load is incurred on client applications, which are responsible for handling SCRAM hashing. 
It's important to plan for this additional CPU load to avoid performance degradation, CPU starvation, and connection pool exhaustion on the client. For example, the following set of circumstances can exhaust the client application's resources: - -1. SCRAM Authentication is enabled on the cluster. -1. The client driver's connection pool has no defined maximum number of connections. -1. The client application issues transactions concurrently. - -In this situation, each new connection uses more CPU on the client application server than connecting to a cluster without SCRAM Authentication enabled. Because of this additional CPU load, each concurrent transaction is slower, and a larger quantity of concurrent transactions can accumulate, in conjunction with a larger number of concurrent connections. In this situation, it can be difficult for the client application server to recover. - -To mitigate against this situation, Cockroach Labs recommends that you: - -{% include_cached {{page.version.version}}/scram-authentication-recommendations.md %} - - -## See also - -### Tasks - -- [Connect to a CockroachDB Cluster](connect-to-the-database.html) -- [Run Multi-Statement Transactions](run-multi-statement-transactions.html) -- [Optimize Statement Performance Overview][fast] - -### Reference - -- [Common Errors and Solutions](common-errors.html) -- [Transactions](transactions.html) -- [Transaction retries](transactions.html#client-side-intervention) -- [SQL Layer][sql] - - - -[sql]: architecture/sql-layer.html -[fast]: make-queries-fast.html diff --git a/src/current/v22.1/eventlog.md b/src/current/v22.1/eventlog.md deleted file mode 100644 index 2dd3c072d82..00000000000 --- a/src/current/v22.1/eventlog.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -title: Notable Event Types -summary: Reference documentation for notable event types in logs. -toc: true -docs_area: reference.logging ---- - -{% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/{{ page.release_info.crdb_branch_name }}/docs/generated/eventlog.md %} diff --git a/src/current/v22.1/example-apps.md b/src/current/v22.1/example-apps.md deleted file mode 100644 index a2a4d141902..00000000000 --- a/src/current/v22.1/example-apps.md +++ /dev/null @@ -1,99 +0,0 @@ ---- -title: Example Apps -summary: Examples that show you how to build simple applications with CockroachDB -tags: golang, python, java -toc: true -docs_area: develop -key: build-an-app-with-cockroachdb.html ---- - -The examples in this section show you how to build simple applications using CockroachDB. - -Click the links in the tables below to see simple but complete example applications for each supported language and library combination. - -If you are looking to do a specific task such as connect to the database, insert data, or run multi-statement transactions, see [this list of tasks](#tasks). - -{{site.data.alerts.callout_info}} -Applications may encounter incompatibilities when using advanced or obscure features of a driver or ORM with **partial** support. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support. - -Note that tools with [**community-level** support](community-tooling.html) have been tested or developed by the CockroachDB community, but are not officially supported by Cockroach Labs. If you encounter problems with using these tools, please contact the maintainer of the tool with details. 
-{{site.data.alerts.end}} - -## JavaScript/TypeScript - -| Driver/ORM Framework | Support level | Example apps | -|---------------------------------------------------------+----------------+--------------------------------------------------------| -| [node-postgres](https://www.npmjs.com/package/pg) | Full | [Quickstart](../cockroachcloud/quickstart.html?filters=node)
    [AWS Lambda](deploy-lambda-function.html)
    [Simple CRUD](build-a-nodejs-app-with-cockroachdb.html) -| [Sequelize](https://www.npmjs.com/package/sequelize) | Full | [Simple CRUD](build-a-nodejs-app-with-cockroachdb-sequelize.html) -| [Knex.js](https://knexjs.org/) | Full | [Simple CRUD](build-a-nodejs-app-with-cockroachdb-knexjs.html) -| [Prisma](https://prisma.io) | Full | [Simple CRUD](build-a-nodejs-app-with-cockroachdb-prisma.html)
    [React Web App (Netlify)](deploy-app-netlify.html)
    [React Web App (Next.js/Vercel)](deploy-app-vercel.html) -| [TypeORM](https://www.npmjs.com/package/typeorm) | Full | [Simple CRUD](build-a-typescript-app-with-cockroachdb.html) - -## Python - -| Driver/ORM Framework | Support level | Example apps | -|-----------------------------------------------------------------+----------------+--------------------------------------------------------| -| [psycopg2](https://www.psycopg.org/docs/install.html) | Full | [Quickstart](../cockroachcloud/quickstart.html?filters=python)
    [AWS Lambda](deploy-lambda-function.html)
    [Simple CRUD](build-a-python-app-with-cockroachdb.html) -| [SQLAlchemy](https://www.sqlalchemy.org/) | Full | [Simple CRUD](build-a-python-app-with-cockroachdb-sqlalchemy.html)
    [Multi-region Flask Web App](movr-flask-overview.html) -| [Django](https://pypi.org/project/Django/) | Full | [Simple CRUD](build-a-python-app-with-cockroachdb-django.html) - -## Go - -| Driver/ORM Framework | Support level | Example apps | -|--------------------------------------------------+----------------+--------------------------------------------------------| -| [pgx](https://github.com/jackc/pgx/releases) | Full | [Quickstart](../cockroachcloud/quickstart.html?filters=go)
    [Simple CRUD](build-a-go-app-with-cockroachdb.html) -| [GORM](https://github.com/jinzhu/gorm/releases) | Full | [Simple CRUD](build-a-go-app-with-cockroachdb-gorm.html) -| [pq](https://github.com/lib/pq) | Full | [Simple CRUD](build-a-go-app-with-cockroachdb-pq.html) -| [upper/db](https://github.com/upper/db) | Full | [Simple CRUD](build-a-go-app-with-cockroachdb-upperdb.html) - -## Java - -| Driver/ORM Framework | Support level | Example apps | -|--------------------------------------------+----------------+--------------------------------------------------------| -| [JDBC](https://jdbc.postgresql.org/) | Full | [Quickstart](../cockroachcloud/quickstart.html?filters=java)
    [Simple CRUD](build-a-java-app-with-cockroachdb.html)
    [Roach Data (Spring Boot App)](build-a-spring-app-with-cockroachdb-jdbc.html) -| [Hibernate](https://hibernate.org/orm/) | Full | [Simple CRUD](build-a-java-app-with-cockroachdb-hibernate.html)
    [Roach Data (Spring Boot App)](build-a-spring-app-with-cockroachdb-jpa.html) -| [jOOQ](https://www.jooq.org/) | Full | [Simple CRUD](build-a-java-app-with-cockroachdb-jooq.html) - -## Ruby - -| Driver/ORM Framework | Support level | Example apps | -|-----------------------------------------------------------+----------------+--------------------------------------------------------| -| [pg](https://rubygems.org/gems/pg) | Full | [Simple CRUD](build-a-ruby-app-with-cockroachdb.html) -| [Active Record](https://rubygems.org/gems/activerecord) | Full | [Simple CRUD](build-a-ruby-app-with-cockroachdb-activerecord.html) - -## C# - -| Driver/ORM Framework | Support level | Example apps | -|-----------------------------------------------------------+----------------+--------------------------------------------------------| -| [Npgsql](https://www.npgsql.org/) | Partial | [Simple CRUD](build-a-csharp-app-with-cockroachdb.html) - -## Rust - -| Driver/ORM Framework | Support level | Example apps | -|------------------------------------------------+----------------+--------------------------------------------------------| -| [Rust-Postgres](https://github.com/sfackler/rust-postgres) | Partial | [Simple CRUD](build-a-rust-app-with-cockroachdb.html) - - -## See also - -Reference information: - -- [Client drivers](install-client-drivers.html) -- [Third-Party Tools Supported by Cockroach Labs](third-party-database-tools.html) -- [Third-Party Tools Supported by the Community](community-tooling.html) -- [Connection parameters](connection-parameters.html) -- [Transactions](transactions.html) -- [Performance best practices](performance-best-practices-overview.html) - - - -Specific tasks: - -- [Connect to the Database](connect-to-the-database.html) -- [Insert Data](insert-data.html) -- [Query Data](query-data.html) -- [Update Data](update-data.html) -- [Delete Data](delete-data.html) -- [Optimize Statement Performance](make-queries-fast.html) -- [Run Multi-Statement Transactions](run-multi-statement-transactions.html) -- [Error Handling and Troubleshooting](error-handling-and-troubleshooting.html) diff --git a/src/current/v22.1/experimental-audit.md b/src/current/v22.1/experimental-audit.md deleted file mode 100644 index 76b66e9907f..00000000000 --- a/src/current/v22.1/experimental-audit.md +++ /dev/null @@ -1,128 +0,0 @@ ---- -title: EXPERIMENTAL_AUDIT -summary: Use the EXPERIMENTAL_AUDIT subcommand to turn SQL audit logging on or off for a table. -toc: true -docs_area: reference.sql ---- - -`EXPERIMENTAL_AUDIT` is a subcommand of [`ALTER TABLE`](alter-table.html). When applied to a table, it enables or disables the recording of SQL audit events to the [`SENSITIVE_ACCESS`](logging.html#sensitive_access) logging channel for that table. - -{{site.data.alerts.callout_info}} -The `SENSITIVE_ACCESS` log output is also called the SQL audit log. See [SQL Audit Logging](sql-audit-logging.html) for a detailed example. -{{site.data.alerts.end}} - -SQL audit logs contain detailed information about queries being executed against your system, including: - -- Full text of the query (which may include personally identifiable information (PII)) -- Date/Time -- Client address -- Application name - -{{site.data.alerts.callout_success}} -For descriptions of all SQL audit event types and their fields, see [Notable Event Types](eventlog.html#sql-access-audit-events). -{{site.data.alerts.end}} - -CockroachDB stores audit log information in a way that ensures durability, but negatively impacts performance. 
As a result, we recommend using SQL audit logs for security purposes only. For more information, see [Performance considerations](#performance-considerations). - -{{site.data.alerts.callout_info}} -{% include feature-phases/preview.md %} -{{site.data.alerts.end}} - -{% include {{ page.version.version }}/sql/combine-alter-table-commands.md %} - -## Synopsis - -
    -{% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/experimental_audit.html %} -
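For quick reference, the statement takes one of two forms, sketched below against a hypothetical table `t` (the detailed examples later on this page use a `customers` table):

{% include_cached copy-clipboard.html %}
~~~ sql
-- Enable audit logging for the table. READ and WRITE must be specified together.
ALTER TABLE t EXPERIMENTAL_AUDIT SET READ WRITE;

-- Disable audit logging for the table.
ALTER TABLE t EXPERIMENTAL_AUDIT SET OFF;
~~~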
    - -## Required privileges - -Only members of the `admin` role can enable audit logs on a table. By default, the `root` user belongs to the `admin` role. - -## Parameters - - Parameter | Description ---------------+---------------------------------------------------------- - `table_name` | The name of the table you want to create audit logs for. - `READ` | Log all table reads to the audit log file. - `WRITE` | Log all table writes to the audit log file. - `OFF` | Turn off audit logging. - -{{site.data.alerts.callout_info}} -This command logs all reads and writes, and both the READ and WRITE parameters are required (as shown in the examples below). Logging for only reads or only writes is not supported. -{{site.data.alerts.end}} - -## Audit log file format - -Audit log messages, like all [log messages](logging-overview.html), consist of two sections: - -- A payload that contains notable events structured in JSON. These can include information such as the application name, full text of the query (which may contain PII), user account that triggered the event, number of rows produced (e.g., for `SELECT`) or processed (e.g., for `INSERT` or `UPDATE`), status of the query, and more. For more information on the possible event types logged to the `SENSITIVE_ACCESS` channel, see [Notable Event Types](eventlog.html#sql-access-audit-events). -- An envelope that contains event metadata (e.g., severity, date, timestamp, channel). Depending on the log format you specify when [configuring logs](configure-logs.html), the envelope can be formatted either as JSON or as a flat prefix to the message. - -## Audit log file storage location - -By [default](configure-logs.html#default-logging-configuration), audit logs are prefixed `cockroach-sql-audit` and are stored in the [same directory](configure-logs.html#logging-directory) as the other logs generated by CockroachDB. - -To store the audit log files in a specific directory, [configure the `SENSITIVE_ACCESS` channel](configure-logs.html#output-to-files) with a custom `dir` path. - -{{site.data.alerts.callout_success}} -If your deployment requires particular lifecycle and access policies for audit log files, point `SENSITIVE_ACCESS` to a directory that has permissions set so that only CockroachDB can create/delete files. -{{site.data.alerts.end}} - -## Viewing schema changes - -{% include {{ page.version.version }}/misc/schema-change-view-job.md %} - -## Performance considerations - -To ensure [non-repudiation](https://en.wikipedia.org/wiki/Non-repudiation) in audit logs, we recommend [enabling `auditable`](configure-logs.html#configure-log-sinks) for the `SENSITIVE_ACCESS` channel. CockroachDB will then synchronously log all of the activity of every user on a cluster in a way that is durable to system failures. Note that every query that causes a logging event must access the disk of the node on which audit logging is enabled. As a result, enabling `auditable` on a logging channel negatively impacts performance, and we recommend using this setting for security purposes only. - -For debugging and troubleshooting on production clusters, the most performant way to log all queries is to enable the `SQL_EXEC` logging channel. For details, see [Logging Use Cases](logging-use-cases.html#sql_exec). - -## Examples - -### Turn on audit logging - -Let's say you have a `customers` table that contains personally identifiable information (PII). 
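For illustration only, assume a minimal schema along the following lines (the column names are hypothetical; any table holding PII is handled the same way):

{% include_cached copy-clipboard.html %}
~~~ sql
CREATE TABLE customers (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name STRING NOT NULL,
    address STRING,      -- PII
    credit_card STRING   -- PII
);
~~~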
To turn on audit logs for that table, run the following command: - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER TABLE customers EXPERIMENTAL_AUDIT SET READ WRITE; -~~~ - -Now, every access of customer data is logged to the `SENSITIVE_ACCESS` channel in a [`sensitive_table_access`](eventlog.html#sensitive_table_access) event that looks like the following: - -~~~ -I210323 18:50:10.951550 1182 8@util/log/event_log.go:32 ⋮ [n1,client=‹[::1]:49851›,hostnossl,user=root] 4 ={"Timestamp":1616525410949087000,"EventType":"sensitive_table_access","Statement":"‹SELECT * FROM \"\".\"\".customers›","User":"‹root›","DescriptorID":52,"ApplicationName":"‹$ cockroach sql›","ExecMode":"exec","NumRows":2,"Age":2.514,"FullTableScan":true,"TxnCounter":38,"TableName":"‹defaultdb.public.customers›","AccessMode":"r"} -~~~ - -{{site.data.alerts.callout_info}} -The above example shows the default [`crdb-v2`](log-formats.html#format-crdb-v2) log format. This can be changed to a different format (e.g., JSON). For details, see [Configure Logs](configure-logs.html#file-logging-format). -{{site.data.alerts.end}} - -{{site.data.alerts.callout_success}} -For descriptions of all SQL audit event types and their fields, see [Notable Event Types](eventlog.html#sql-access-audit-events). -{{site.data.alerts.end}} - -To turn on auditing for more than one table, issue a separate `ALTER` statement for each table. - -{{site.data.alerts.callout_success}} -For a more detailed example, see [SQL Audit Logging](sql-audit-logging.html). -{{site.data.alerts.end}} - -### Turn off audit logging - -To turn off logging, issue the following command: - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER TABLE customers EXPERIMENTAL_AUDIT SET OFF; -~~~ - -## See also - -- [SQL Audit Logging](sql-audit-logging.html) -- [Logging Overview](logging-overview.html) -- [`ALTER TABLE`](alter-table.html) -- [`SHOW JOBS`](show-jobs.html) diff --git a/src/current/v22.1/explain-analyze.md b/src/current/v22.1/explain-analyze.md deleted file mode 100644 index 2dea5f6ca85..00000000000 --- a/src/current/v22.1/explain-analyze.md +++ /dev/null @@ -1,409 +0,0 @@ ---- -title: EXPLAIN ANALYZE -summary: The EXPLAIN ANALYZE statement executes a query and generates a physical statement plan with execution statistics. -toc: true -docs_area: reference.sql ---- - -The `EXPLAIN ANALYZE` [statement](sql-statements.html) **executes a SQL query** and generates a statement plan with execution statistics. Statement plans provide information around SQL execution, which can be used to troubleshoot slow queries by figuring out where time is being spent, how long a processor (i.e., a component that takes streams of input rows and processes them according to a specification) is not doing work, etc. The `(DISTSQL)` option returns the statement plan and performance statistics as well as a generated link to a graphical distributed SQL physical statement plan tree. For more information about distributed SQL queries, see the [DistSQL section of our SQL layer architecture docs](architecture/sql-layer.html#distsql). The `(DEBUG)` option generates a URL to download a bundle with more details about the statement plan for advanced debugging. - -{{site.data.alerts.callout_info}} -{% include {{ page.version.version }}/sql/physical-plan-url.md %} -{{site.data.alerts.end}} - -## Aliases - -`EXPLAIN ANALYSE` is an alias for `EXPLAIN ANALYZE`. - -## Synopsis - -
    {% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/explain_analyze.html %}
    - -## Parameters - -Parameter | Description --------------------|----------- -`PLAN` | _(Default)_ Execute the statement and return a statement plan with planning and execution time for an [explainable statement](sql-grammar.html#preparable_stmt). See [`PLAN` option](#plan-option). -`DISTSQL` | Execute the statement and return a statement plan and performance statistics as well as a generated link to a graphical distributed SQL physical statement plan tree. See [`DISTSQL` option](#distsql-option). -`DEBUG` | Execute the statement and generate a ZIP file containing files with detailed information about the query and the database objects referenced in the query. See [`DEBUG` option](#debug-option). -`preparable_stmt` | The [statement](sql-grammar.html#preparable_stmt) you want to execute and analyze. All preparable statements are explainable. - -## Required privileges - -The user requires the appropriate [privileges](security-reference/authorization.html#managing-privileges) for the statement being explained. - -## Success responses - -A successful `EXPLAIN ANALYZE` statement returns a table with the following details in the `info` column: - - Detail | Description ---------|------------ -[Global properties](#global-properties) | The properties and statistics that apply to the entire statement plan. -[Statement plan tree properties](#statement-plan-tree-properties) | A tree representation of the hierarchy of the statement plan. -Node details | The properties, columns, and ordering details for the current statement plan node in the tree. -Time | The time details for the statement. The total time is the planning and execution time of the statement. The execution time is the time it took for the final statement plan to complete. The network time is the amount of time it took to distribute the statement across the relevant nodes in the cluster. Some statements do not need to be distributed, so the network time is 0ms. - -If you use the [`DISTSQL` option](#distsql-option), the statement will also return a URL generated for a physical statement plan that provides high level information about how a statement will be executed. {% include {{ page.version.version }}/sql/physical-plan-url.md %} For details about reading the physical statement plan, see [DistSQL plan diagram](#distsql-plan-diagram). - -If you use the [`DEBUG` option](#debug-option), the statement will return only a URL and instructions to download the `DEBUG` bundle, which includes the physical statement plan. - -### Global properties - -Property | Description -----------------|------------ -planning time | The total time the planner took to create a statement plan. -execution time | The time it took for the final statement plan to complete. -distribution | Whether the statement was distributed or local. If `distribution` is `full`, execution of the statement is performed by multiple nodes in parallel, then the results are returned by the gateway node. If `local`, the execution plan is performed only on the gateway node. Even if the execution plan is `local`, row data may be fetched from remote nodes, but the processing of the data is performed by the local node. -vectorized | Indicates whether the [vectorized execution engine](vectorized-execution.html) was used in this statement. -rows read from KV | The number of rows read from the [storage layer](architecture/storage-layer.html). -cumulative time spent in KV | The total amount of time spent in the storage layer. 
-maximum memory usage | The maximum amount of memory used by this statement anytime during its execution. -network usage | The amount of data transferred over the network while the statement was executed. If the value is 0 B, the statement was executed on a single node and didn't use the network. -regions | The [regions](show-regions.html) where the affected nodes were located. -max sql temp disk usage | ([`DISTSQL`](#distsql-option) option only) How much disk spilling occurs when executing a query. This property is displayed only when the disk usage is greater than zero. - -### Statement plan tree properties - -Statement plan tree properties | Description --------------------------------|------------ -processor | Each processor in the statement plan hierarchy has a node with details about that phase of the statement. For example, a statement with a `GROUP BY` clause has a `group` processor with details about the cluster nodes, rows, and operations related to the `GROUP BY` operation. -nodes | The names of the CockroachDB cluster nodes affected by this phase of the statement. -regions | The [regions](show-regions.html) where the affected nodes were located. -actual row count | The actual number of rows affected by this processor during execution. -KV time | The total time this phase of the statement was in the [storage layer](architecture/storage-layer.html). -KV contention time | The time the [storage layer](architecture/storage-layer.html) was in contention during this phase of the statement. -KV rows read | During scans, the number of rows in the [storage layer](architecture/storage-layer.html) read by this phase of the statement. -KV bytes read | During scans, the amount of data read from the [storage layer](architecture/storage-layer.html) during this phase of the statement. -estimated max memory allocated | The estimated maximum allocated memory for a statement. -estimated max sql temp disk usage | The estimated maximum temporary disk usage for a statement. -MVCC step count (ext/int) | The number of times that the underlying storage iterator stepped forward during the work to serve the operator's reads, including stepping over [MVCC keys](architecture/storage-layer.html#mvcc) that could not be used in the scan. -MVCC seek count (ext/int) | The number of times that the underlying storage iterator jumped (seeked) to a different data location. -estimated row count | The estimated number of rows affected by this processor according to the statement planner, the percentage of the table the query spans, and when the statistics for the table were last collected. -table | The table and index used in a scan operation in a statement, in the form `{table name}@{index name}`. -spans | The interval of the key space read by the processor. If `spans` is `FULL SCAN`, the table is scanned on all key ranges of the index. If `spans` is `[/1 - /1]`, only the key with value `1` is read by the processor. - -## `PLAN` option - -By default, `EXPLAIN ANALYZE` uses the `PLAN` option. `EXPLAIN ANALYZE` and `EXPLAIN ANALYZE (PLAN)` produce the same output. - -### `PLAN` options - -The `PLAN` options `VERBOSE` and `TYPES` described in [`EXPLAIN` options](explain.html#options) are also supported. For an example, see [`EXPLAIN ANALYZE (VERBOSE)`](#explain-analyze-verbose). - -## `DISTSQL` option - -`EXPLAIN ANALYZE (DISTSQL)` generates a physical statement in the [plan diagram](#distsql-plan-diagram). The DistSQL plan diagram displays the physical statement plan, as well as execution statistics. 
The statistics listed depend on the query type and the [execution engine used](vectorized-execution.html). If the query contains subqueries or post-queries there will be multiple diagrams. - -{{site.data.alerts.callout_info}} -You can use `EXPLAIN ANALYZE (DISTSQL)` only as the top-level statement in a query. -{{site.data.alerts.end}} - -### DistSQL plan diagram - -The graphical plan diagram displays the processors and operations that make up the statement plan. While the text output from the `PLAN` option shows the statement plan across the cluster, the `DISTSQL` option shows details on each node involved in the query. - -Field | Description | Execution engine -------+-------------+---------------- -<Processor>/<id> | The processor and processor ID used to read data into the SQL execution engine.

    A processor is a component that takes streams of input rows, processes them according to a specification, and outputs one stream of rows. For example, a `TableReader `processor reads in data, and an `Aggregator` aggregates input rows. | Both -<table>@<index> | The index used by the processor. | Both -Spans | The interval of the key space read by the processor. For example, `[/1 - /1]` indicates that only the key with value `1` is read by the processor. | Both -Out | The output columns. | Both -KV time | The total time this phase of the query was in the [storage layer](architecture/storage-layer.html). | Both -KV contention time | The time the storage layer was in contention during this phase of the query. | Both -KV rows read | During scans, the number of rows in the storage layer read by this phase of the query. | Both -KV bytes read | During scans, the amount of data read from the storage layer during this phase of the query. | Both -cluster nodes | The names of the CockroachDB cluster nodes involved in the execution of this processor. | Both -batches output | The number of batches of columnar data output. | Vectorized engine only -rows output | The number of rows output. | Vectorized engine only -IO time | How long the TableReader processor spent reading data from disk. | Vectorized engine only -stall time | How long the processor spent not doing work. This is aggregated into the stall time numbers as the query progresses down the tree (i.e., stall time is added up and overlaps with previous time). | Row-oriented engine only -bytes read | The size of the data read by the processor. | Both -rows read | The number of rows read by the processor. | Both -@<n> | The index of the column relative to the input. | Both -max memory used | How much memory (if any) is used to buffer rows. | Row-oriented engine only -max disk used | How much disk (if any) is used to buffer data. Routers and processors will spill to disk buffering if there is not enough memory to buffer the data. | Row-oriented engine only -execution time | How long the engine spent executing the processor. | Vectorized engine only -max vectorized memory allocated | How much memory is allocated to the processor to buffer batches of columnar data. | Vectorized engine only -max vectorized disk used | How much disk (if any) is used to buffer columnar data. Processors will spill to disk buffering if there is not enough memory to buffer the data. | Vectorized engine only -left(@<n>)=right(@<n>) | The equality columns used in the join. | Both -stored side | The smaller table that was stored as an in-memory hash table. | Both -rows routed | How many rows were sent by routers, which can be used to understand network usage. | Row-oriented engine only -network latency | The latency time in nanoseconds between nodes in a stream. | Vectorized engine only -bytes sent | The number of actual bytes sent (i.e., encoding of the rows). This is only relevant when doing network communication. | Both -Render | The stage that renders the output. | Both -by hash | _(Orange box)_ The router, which is a component that takes one stream of input rows and sends them to a node according to a routing algorithm.

    For example, a hash router hashes columns of a row and sends the results to the node that is aggregating the result rows. | Both -unordered / ordered | _(Blue box)_ A synchronizer that takes one or more output streams and merges them to be consumable by a processor. An ordered synchronizer is used to merge ordered streams and keeps the rows in sorted order. | Both -<data type> | If you specify [`EXPLAIN (DISTSQL, TYPES)`](explain.html#distsql-option), lists the data types of the input columns. | Both -Response | The response back to the client. | Both - -## `DEBUG` option - -`EXPLAIN ANALYZE (DEBUG)` executes a query and generates a link to a ZIP file that contains the [physical statement plan](#distsql-plan-diagram), execution statistics, statement tracing, and other information about the query. - -File | Description ---------------------+------------------- -`stats-{table}.sql` | Contains [statistics](create-statistics.html) for a table in the query. -`schema.sql` | Contains [`CREATE`](create-table.html) statements for objects in the query. -`env.sql` | Contains information about the CockroachDB environment. -`trace.txt` | Contains [statement traces](show-trace.html) in plaintext format. -`trace.json` | Contains statement traces in JSON format. -`trace-jaeger.json` | Contains statement traces in JSON format that can be [imported to Jaeger](query-behavior-troubleshooting.html#visualize-statement-traces-in-jaeger). -`distsql.html` | The query's [physical statement plan](#distsql-plan-diagram). This diagram is identical to the one generated by [`EXPLAIN (DISTSQL)`](explain.html#distsql-option). -`plan.txt` | The query execution plan. This is identical to the output of [`EXPLAIN (VERBOSE)`](explain.html#verbose-option). -`opt.txt` | The statement plan tree generated by the [cost-based optimizer](cost-based-optimizer.html). This is identical to the output of [`EXPLAIN (OPT)`](explain.html#opt-option). -`opt-v.txt` | The statement plan tree generated by the cost-based optimizer, with cost details. This is identical to the output of [`EXPLAIN (OPT, VERBOSE)`](explain.html#opt-option). -`opt-vv.txt` | The statement plan tree generated by the cost-based optimizer, with cost details and input column data types. This is identical to the output of [`EXPLAIN (OPT, TYPES)`](explain.html#opt-option). -`vec.txt` | The statement plan tree generated by the [vectorized execution](vectorized-execution.html) engine. This is identical to the output of [`EXPLAIN (VEC)`](explain.html#vec-option). -`vec-v.txt` | The statement plan tree generated by the vectorized execution engine. This is identical to the output of [`EXPLAIN (VEC, VERBOSE)`](explain.html#vec-option). -`statement.txt` | The SQL statement for the query. - -You can obtain this ZIP file by following the link provided in the `EXPLAIN ANALYZE (DEBUG)` output, or by activating [statement diagnostics](ui-statements-page.html#diagnostics) in the DB Console. - -{% include common/sql/statement-bundle-warning.md %} - -## Examples - -The following examples use the [`movr` example dataset](cockroach-demo.html#datasets). - -{% include {{ page.version.version }}/demo_movr.md %} - -### `EXPLAIN ANALYZE` - -Use `EXPLAIN ANALYZE` without an option, or equivalently with the `PLAN` option, to execute a query and display the physical statement plan with execution statistics. 
- -For example, the following `EXPLAIN ANALYZE` statement executes a simple query against the [MovR database](movr.html) and then displays the physical statement plan with execution statistics: - -{% include_cached copy-clipboard.html %} -~~~ sql -> EXPLAIN ANALYZE SELECT city, AVG(revenue) FROM rides GROUP BY city; -~~~ - -~~~ - planning time: 604µs - execution time: 51ms - distribution: full - vectorized: true - rows read from KV: 125,000 (21 MiB) - cumulative time spent in KV: 106ms - maximum memory usage: 5.0 MiB - network usage: 2.6 KiB (24 messages) - regions: us-east1 - - • group (streaming) - │ nodes: n1, n2, n3 - │ regions: us-east1 - │ actual row count: 9 - │ estimated row count: 9 - │ group by: city - │ ordered: +city - │ - └── • scan - nodes: n1, n2, n3 - regions: us-east1 - actual row count: 125,000 - KV time: 106ms - KV contention time: 0µs - KV rows read: 125,000 - KV bytes read: 21 MiB - estimated max memory allocated: 21 MiB - estimated row count: 125,000 (100% of the table; stats collected 1 hour ago) - table: rides@rides_pkey - spans: FULL SCAN -(30 rows) -~~~ - -If you perform a join, the estimated max memory allocation is also reported for the join. For example: - -{% include_cached copy-clipboard.html %} -~~~ sql -EXPLAIN ANALYZE SELECT * FROM vehicles JOIN rides ON rides.vehicle_id = vehicles.id and rides.city = vehicles.city limit 100; -~~~ -~~~ - info ------------------------------------------------------ - planning time: 1ms - execution time: 18ms - distribution: full - vectorized: true - rows read from KV: 3,173 (543 KiB) - cumulative time spent in KV: 37ms - maximum memory usage: 820 KiB - network usage: 3.3 KiB (2 messages) - regions: us-east1 - - • limit - │ nodes: n1 - │ regions: us-east1 - │ actual row count: 100 - │ estimated row count: 100 - │ count: 100 - │ - └── • lookup join - │ nodes: n1, n2, n3 - │ regions: us-east1 - │ actual row count: 194 - │ KV time: 31ms - │ KV contention time: 0µs - │ KV rows read: 173 - │ KV bytes read: 25 KiB - │ estimated max memory allocated: 300 KiB - │ estimated row count: 13,837 - │ table: vehicles@vehicles_pkey - │ equality: (city, vehicle_id) = (city,id) - │ equality cols are key - │ - └── • scan - ... -(41 rows) -~~~ - -### `EXPLAIN ANALYZE (VERBOSE)` - -The `VERBOSE` option displays the physical statement plan with additional execution statistics. 
- -{% include_cached copy-clipboard.html %} -~~~ sql -EXPLAIN ANALYZE (VERBOSE) SELECT city, AVG(revenue) FROM rides GROUP BY city; -~~~ - -~~~ - info --------------------------------------------------------------------------------------- - planning time: 5ms - execution time: 65ms - distribution: full - vectorized: true - rows read from KV: 125,000 (21 MiB) - cumulative time spent in KV: 114ms - maximum memory usage: 5.0 MiB - network usage: 2.6 KiB (24 messages) - regions: us-east1 - - • group (streaming) - │ columns: (city, avg) - │ nodes: n1, n2, n3 - │ regions: us-east1 - │ actual row count: 9 - │ vectorized batch count: 4 - │ estimated row count: 9 - │ aggregate 0: avg(revenue) - │ group by: city - │ ordered: +city - │ - └── • scan - columns: (city, revenue) - ordering: +city - nodes: n1, n2, n3 - regions: us-east1 - actual row count: 125,000 - vectorized batch count: 124 - KV time: 114ms - KV contention time: 0µs - KV rows read: 125,000 - KV bytes read: 21 MiB - estimated max memory allocated: 21 MiB - MVCC step count (ext/int): 125,000/125,000 - MVCC seek count (ext/int): 18/18 - estimated row count: 125,000 (100% of the table; stats collected 1 hour ago) - table: rides@rides_pkey - spans: FULL SCAN -(38 rows) -~~~ - -### `EXPLAIN ANALYZE (DISTSQL)` - -Use `EXPLAIN ANALYZE (DISTSQL)` to execute a query, display the physical statement plan with execution statistics, and generate a link to a graphical DistSQL statement plan. - -{% include_cached copy-clipboard.html %} -~~~ sql -EXPLAIN ANALYZE (DISTSQL) SELECT city, AVG(revenue) FROM rides GROUP BY city; -~~~ - -~~~ - info ----------------------------------------------------------------------------------------------------- - planning time: 638µs - execution time: 132ms - distribution: full - vectorized: true - rows read from KV: 125,000 (21 MiB) - cumulative time spent in KV: 228ms - maximum memory usage: 7.5 MiB - network usage: 2.5 KiB (24 messages) - regions: us-east1 - - • group (streaming) - │ nodes: n1, n2, n3 - │ regions: us-east1 - │ actual row count: 9 - │ estimated row count: 9 - │ group by: city - │ ordered: +city - │ - └── • scan - nodes: n1, n2, n3 - regions: us-east1 - actual row count: 125,000 - KV time: 228ms - KV contention time: 0µs - KV rows read: 125,000 - KV bytes read: 21 MiB - estimated max memory allocated: 20 MiB - estimated row count: 125,000 (100% of the table; stats collected 1 second ago) - table: rides@rides_pkey - spans: FULL SCAN - - Diagram: https://cockroachdb.github.io/distsqlplan/decode.html#eJzUmF9u47YTx99_pyD4lMVPuxIpWZb8... -(32 rows) -~~~ - -To view the [DistSQL plan diagram](#distsql-plan-diagram), open the URL following **Diagram**. For an example, see [`DISTSQL` option](explain.html#distsql-option). - -### `EXPLAIN ANALYZE (DEBUG)` - -Use the [`DEBUG`](#debug-option) option to generate a ZIP file containing files with information about the query and the database objects referenced in the query. For example: - -{% include_cached copy-clipboard.html %} -~~~ sql -> EXPLAIN ANALYZE (DEBUG) SELECT city, AVG(revenue) FROM rides GROUP BY city; -~~~ - -~~~ - info --------------------------------------------------------------------------------- - Statement diagnostics bundle generated. Download from the DB Console (Advanced - Debug -> Statement Diagnostics History), via the direct link below, or using - the SQL shell or command line. 
- Admin UI: http://127.0.0.1:8080 - Direct link: http://127.0.0.1:8080/_admin/v1/stmtbundle/765493679630483457 (Not available for CockroachDB {{ site.data.products.serverless }} clusters.) - SQL shell: \statement-diag download 765493679630483457 - Command line: cockroach statement-diag download 765493679630483457 -(7 rows) -~~~ - -To download the ZIP file containing the statement diagnostics, open the URL after **Direct link**, run the `\statement-diag download` command, or run `cockroach statement-diag download`. You can also obtain the bundle by activating [statement diagnostics](ui-statements-page.html#diagnostics) in the DB Console. - -## See also - -- [`ALTER TABLE`](alter-table.html) -- [`ALTER SEQUENCE`](alter-sequence.html) -- [`BACKUP`](backup.html) -- [`CANCEL JOB`](cancel-job.html) -- [`CREATE DATABASE`](create-database.html) -- [`DROP DATABASE`](drop-database.html) -- [`EXPLAIN`](explain.html) -- [`EXECUTE`](sql-grammar.html#execute_stmt) -- [`IMPORT`](import.html) -- [Indexes](indexes.html) -- [`INSERT`](insert.html) -- [`PAUSE JOB`](pause-job.html) -- [`RESET`](reset-vars.html) -- [`RESTORE`](restore.html) -- [`RESUME JOB`](resume-job.html) -- [`SELECT`](select-clause.html) -- [Selection Queries](selection-queries.html) -- [`SET`](set-vars.html) -- [`SET CLUSTER SETTING`](set-cluster-setting.html) -- [`SHOW COLUMNS`](show-columns.html) -- [`UPDATE`](update.html) -- [`UPSERT`](upsert.html) diff --git a/src/current/v22.1/explain.md b/src/current/v22.1/explain.md deleted file mode 100644 index 16fb13f67ed..00000000000 --- a/src/current/v22.1/explain.md +++ /dev/null @@ -1,981 +0,0 @@ ---- -title: EXPLAIN -summary: The EXPLAIN statement provides information you can use to optimize SQL queries. -toc: true -docs_area: reference.sql ---- - -The `EXPLAIN` [statement](sql-statements.html) returns CockroachDB's statement plan for a [preparable statement](sql-grammar.html#preparable_stmt). You can use this information to optimize the query. - -{{site.data.alerts.callout_success}} -To execute a statement and return a physical statement plan with execution statistics, use [`EXPLAIN ANALYZE`](explain-analyze.html). -{{site.data.alerts.end}} - -## Query optimization - -Using `EXPLAIN` output, you can optimize your queries as follows: - -- Restructure queries to require fewer levels of processing. Queries with fewer levels execute more quickly. -- Avoid scanning an entire table, which is the slowest way to access data. [Create indexes](indexes.html) that contain at least one of the columns that the query is filtering in its `WHERE` clause. - -You can find out if your queries are performing entire table scans by using `EXPLAIN` to see which: - -- Indexes the query uses; shown as the value of the `table` property. -- Key values in the index are being scanned; shown as the value of the `spans` property. - -You can also see the estimated number of rows that a scan will perform in the `estimated row count` property. - -For more information about indexing and table scans, see [Find the Indexes and Key Ranges a Query Uses](#find-the-indexes-and-key-ranges-a-query-uses). - -## Synopsis - -
    {% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/explain.html %}
    - -## Required privileges - -The user requires the appropriate [privileges](security-reference/authorization.html#managing-privileges) for the statement being explained. - -## Parameters - - Parameter | Description --------------------+------------ -`VERBOSE` | Show as much information as possible about the statement plan. See [`VERBOSE` option](#verbose-option). -`TYPES` | Include the intermediate [data types](data-types.html) CockroachDB chooses to evaluate intermediate SQL expressions. See [`TYPES` option](#types-option). -`OPT` | Display the statement plan tree generated by the [cost-based optimizer](cost-based-optimizer.html). See [`OPT` option](#opt-option). -`VEC` | Show detailed information about the [vectorized execution](vectorized-execution.html) plan for a query. See [`VEC` option](#vec-option). -`DISTSQL` | Generate a URL to a [distributed SQL physical statement plan diagram](explain-analyze.html#distsql-plan-diagram). See [`DISTSQL` option](#distsql-option). -`preparable_stmt` | The [statement](sql-grammar.html#preparable_stmt) you want details about. All preparable statements are explainable. - -## Success responses - -A successful `EXPLAIN` statement returns a table with the following details in the `info` column: - -Detail | Description ------------|------------- -Global properties | Properties that apply to the entire query plan. Global properties include `distribution` and `vectorized`. -Statement plan tree properties | A tree representation of the hierarchy of the statement plan. -index recommendations: N | Number of index recommendations followed by a list of index actions and SQL statements to perform the actions. -Time | The time details for the query. The total time is the planning and execution time of the query. The execution time is the time it took for the final statement plan to complete. The network time is the amount of time it took to distribute the query across the relevant nodes in the cluster. Some queries do not need to be distributed, so the network time is 0ms. - -## Examples - -The following examples use the [`movr` example dataset](cockroach-demo.html#datasets). - -{% include {{ page.version.version }}/demo_movr.md %} - -### Default statement plans - -By default, `EXPLAIN` includes the least detail about the statement plan but can be useful to find out which indexes and index key ranges are used by a query. For example: - -{% include_cached copy-clipboard.html %} -~~~ sql -> EXPLAIN SELECT * FROM rides WHERE revenue > 90 ORDER BY revenue ASC; -~~~ - -~~~ - info ---------------------------------------------------------------------------------------------------------------------------------------------------- - distribution: full - vectorized: true - - • sort - │ estimated row count: 12,385 - │ order: +revenue - │ - └── • filter - │ estimated row count: 12,385 - │ filter: revenue > 90 - │ - └── • scan - estimated row count: 125,000 (100% of the table; stats collected 19 minutes ago) - table: rides@rides_pkey - spans: FULL SCAN - - index recommendations: 1 - 1. type: index creation - SQL command: CREATE INDEX ON rides (revenue) STORING (vehicle_city, rider_id, vehicle_id, start_address, end_address, start_time, end_time); -(19 rows) - - -Time: 2ms total (execution 2ms / network 0ms) -~~~ - -The output shows the tree structure of the statement plan, in this case a `sort`, a `filter`, and a `scan`. 
- -The output also describes a set of properties, some global to the query, some specific to an operation listed in the tree structure (in this case, `sort`, `filter`, or `scan`), and an index recommendation: - -- `distribution`:`full` - - The planner chose a distributed execution plan, where execution of the query is performed by multiple nodes in parallel, then the results are returned by the gateway node. An execution plan with `full` distribution doesn't process on all nodes in the cluster. It is executed simultaneously on multiple nodes. An execution plan with `local` distribution is performed only on the gateway node. Even if the execution plan is `local`, row data may be fetched from remote nodes, but the processing of the data is performed by the local node. -- `vectorized`:`true` - - The plan will be executed with the [vectorized execution engine](vectorized-execution.html). -- `order`:`+revenue` - - The sort will be ordered ascending on the `revenue` column. -- `filter`: `revenue > 90` - - The scan filters on the `revenue` column. -- `estimated row count`:`125,000 (100% of the table; stats collected 19 minutes ago)` - - The estimated number of rows scanned by the query, in this case, `125,000` rows of data; the percentage of the table the query spans, in this case 100%; and when the statistics for the table were last collected, in this case 19 minutes ago. If you do not see statistics, you can manually generate table statistics with [`CREATE STATISTICS`](create-statistics.html) or configure more frequent statistics generation following the steps in [Control automatic statistics](cost-based-optimizer.html#table-statistics). -- `table`:`rides@rides_pkey` - - The table is scanned on the `rides_pkey` index. -- `spans`:`FULL SCAN` - - The table is scanned on all key ranges of the `rides_pkey` index (i.e., a full table scan). For more information on indexes and key ranges, see the following [example](#find-the-indexes-and-key-ranges-a-query-uses). - -- `index recommendations: 1` - - The number of index recommendations, followed by the recommendation and statement. The recommendation to create an index on the `rides` table and [store](indexes.html#storing-columns) the `vehicle_city`, `rider_id`, `vehicle_id`, `start_address`, `end_address`, `start_time`, and `end_time` columns will eliminate the full scan of the `rides` table. - - Index recommendations are displayed by default. To disable index recommendations, set the `index_recommendations_enabled` [session variable](set-vars.html) to `false`.
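For example, to turn off index recommendations for the current session (a minimal sketch using the session variable named above):

{% include_cached copy-clipboard.html %}
~~~ sql
SET index_recommendations_enabled = false;
~~~

To display them again, set the variable back to `true`.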
- - -Suppose you create the recommended index: - -{% include_cached copy-clipboard.html %} -~~~ -CREATE INDEX ON rides (revenue) STORING (vehicle_city, rider_id, vehicle_id, start_address, end_address, start_time, end_time); -~~~ - -The next `EXPLAIN` call demonstrates that the estimated row count is 10% of the table: - -{% include_cached copy-clipboard.html %} -~~~ sql -EXPLAIN SELECT * FROM rides WHERE revenue > 90 ORDER BY revenue ASC; -~~~ -~~~ - info ------------------------------------------------------------------------------------- - distribution: local - vectorized: true - - • scan - estimated row count: 12,647 (10% of the table; stats collected 22 seconds ago) - table: rides@rides_revenue_idx - spans: (/90 - ] -(7 rows) -~~~ - -If you then limit the number of returned rows: - -{% include_cached copy-clipboard.html %} -~~~ sql -EXPLAIN SELECT * FROM rides WHERE revenue > 90 ORDER BY revenue ASC limit 10; -~~~ - -The limit is reflected both in the estimated row count and a `limit` property: - -~~~ - info ------------------------------------------------------------------------------------ - distribution: local - vectorized: true - - • scan - estimated row count: 10 (<0.01% of the table; stats collected 32 seconds ago) - table: rides@rides_revenue_idx - spans: (/90 - ] - limit: 10 -(8 rows) -~~~ - -### Join queries - -If you run `EXPLAIN` on a [join](joins.html) query, the output will display which type of join will be executed. For example, the following `EXPLAIN` output shows that the query will perform a [hash join](joins.html#hash-joins): - -{% include_cached copy-clipboard.html %} -~~~ sql -> EXPLAIN SELECT * FROM rides AS r -JOIN users AS u ON r.rider_id = u.id; -~~~ - -~~~ - info ---------------------------------------------------------------------------------------------------------------------------------------------------- - distribution: full - vectorized: true - - • hash join - │ estimated row count: 124,482 - │ equality: (rider_id) = (id) - │ - ├── • scan - │ estimated row count: 125,000 (100% of the table; stats collected 13 minutes ago) - │ table: rides@rides_pkey - │ spans: FULL SCAN - │ - └── • scan - estimated row count: 12,500 (100% of the table; stats collected 14 minutes ago) - table: users@users_pkey - spans: FULL SCAN - - index recommendations: 2 - 1. type: index creation - SQL command: CREATE INDEX ON rides (rider_id) STORING (vehicle_city, vehicle_id, start_address, end_address, start_time, end_time, revenue); - 2. 
type: index creation - SQL command: CREATE INDEX ON users (id) STORING (name, address, credit_card); -(22 rows) - - -Time: 2ms total (execution 2ms / network 0ms) -~~~ - -The following output shows that the query will perform a cross join: - -{% include_cached copy-clipboard.html %} -~~~ sql -> EXPLAIN SELECT * FROM rides AS r -JOIN users AS u ON r.city = 'new york'; -~~~ - -~~~ - info ------------------------------------------------------------------------------------------ - distribution: full - vectorized: true - - • cross join - │ estimated row count: 178,283,221 - │ - ├── • scan - │ estimated row count: 14,263 (11% of the table; stats collected 14 minutes ago) - │ table: rides@rides_pkey - │ spans: [/'new york' - /'new york'] - │ - └── • scan - estimated row count: 12,500 (100% of the table; stats collected 15 minutes ago) - table: users@users_pkey - spans: FULL SCAN -(15 rows) - - -Time: 2ms total (execution 2ms / network 0ms) -~~~ - -### Insert queries - -`EXPLAIN` output for [`INSERT`](insert.html) queries is similar to the output for standard `SELECT` queries. For example: - -{% include_cached copy-clipboard.html %} -~~~ sql -> EXPLAIN INSERT INTO users(id, city, name) VALUES ('c28f5c28-f5c2-4000-8000-000000000026', 'new york', 'Petee'); -~~~ - -~~~ - info -------------------------------------------------------- - distribution: local - vectorized: true - - • insert fast path - into: users(id, city, name, address, credit_card) - auto commit - size: 5 columns, 1 row -(7 rows) - - -Time: 1ms total (execution 1ms / network 0ms) -~~~ - -The output for this `INSERT` lists the primary operation (in this case, `insert`), and the table and columns affected by the operation in the `into` field (in this case, the `id`, `city`, `name`, `address`, and `credit_card` columns of the `users` table). The output also includes the size of the `INSERT` in the `size` field (in this case, 5 columns in a single row). - -For more complex types of `INSERT` queries, `EXPLAIN` output can include more information. For example, suppose that you create a `UNIQUE` index on the `users` table: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE UNIQUE INDEX ON users(city, id, name); -~~~ - -To display the `EXPLAIN` output for an [`INSERT ... 
ON CONFLICT` statement](insert.html#on-conflict-clause), which inserts some data that might conflict with the `UNIQUE` constraint imposed on the `name`, `city`, and `id` columns, run: - -{% include_cached copy-clipboard.html %} -~~~ sql -> EXPLAIN INSERT INTO users(id, city, name) VALUES ('c28f5c28-f5c2-4000-8000-000000000026', 'new york', 'Petee') ON CONFLICT DO NOTHING; -~~~ - -~~~ - info ----------------------------------------------------------------------------------------------------------------------------------- - distribution: local - vectorized: true - - • insert - │ into: users(id, city, name, address, credit_card) - │ auto commit - │ arbiter indexes: users_pkey, users_city_id_name_key - │ - └── • lookup join (anti) - │ estimated row count: 0 - │ table: users@users_city_id_name_key - │ equality: (city_cast, column1, name_cast) = (city,id,name) - │ equality cols are key - │ - └── • cross join (anti) - │ estimated row count: 0 - │ - ├── • values - │ size: 4 columns, 1 row - │ - └── • scan - estimated row count: 1 (<0.01% of the table; stats collected 18 minutes ago) - table: users@users_city_id_name_key - spans: [/'new york'/'c28f5c28-f5c2-4000-8000-000000000026' - /'new york'/'c28f5c28-f5c2-4000-8000-000000000026'] -(24 rows) - - -Time: 3ms total (execution 3ms / network 0ms) -~~~ - -Because the `INSERT` includes an `ON CONFLICT` clause, the query requires more than a simple `insert` operation. CockroachDB must check the provided values against the values in the database, to ensure that the `UNIQUE` constraint on `name`, `city`, and `id` is not violated. The output also lists the indexes available to detect conflicts (the `arbiter indexes`), including the `users_city_id_name_key` index. - -### Alter queries - -If you alter a table to split a range as described in [Split a table](split-at.html#split-a-table), the `EXPLAIN` command returns the target table and index names and a `NULL` expiry timestamp: - -{% include_cached copy-clipboard.html %} -~~~ sql -EXPLAIN ALTER TABLE users SPLIT AT VALUES ('chicago'), ('new york'), ('seattle'); -~~~ - -~~~ - info ----------------------------------- - distribution: local - vectorized: true - - • split - │ index: users@users_pkey - │ expiry: CAST(NULL AS STRING) - │ - └── • values - size: 1 column, 3 rows -(9 rows) -~~~ - -If you alter a table to split a range as described in [Set the expiration on a split enforcement](split-at.html#set-the-expiration-on-a-split-enforcement), the `EXPLAIN` command returns the target table and index names and the expiry timestamp: - -{% include_cached copy-clipboard.html %} -~~~ sql -EXPLAIN ALTER TABLE vehicles SPLIT AT VALUES ('chicago'), ('new york'), ('seattle') WITH EXPIRATION '2022-08-10 23:30:00+00:00'; -~~~ - -~~~ - info ------------------------------------------ - distribution: local - vectorized: true - - • split - │ index: vehicles@vehicles_pkey - │ expiry: '2022-08-10 23:30:00+00:00' - │ - └── • values - size: 1 column, 3 rows -(9 rows) -~~~ - -### Options - -#### `VERBOSE` option - -The `VERBOSE` option includes: - -- SQL expressions that are involved in each processing stage, providing more granular detail about which portion of your query is represented at each level. -- Detail about which columns are being used by each level, as well as properties of the result set on that level. 
- -{% include_cached copy-clipboard.html %} -~~~ sql -> EXPLAIN (VERBOSE) SELECT * FROM rides AS r -JOIN users AS u ON r.rider_id = u.id -WHERE r.city = 'new york' -ORDER BY r.revenue ASC; -~~~ - -~~~ - info ------------------------------------------------------------------------------------------------------------------------------------------------------------------- - distribution: full - vectorized: true - - • sort - │ columns: (id, city, vehicle_city, rider_id, vehicle_id, start_address, end_address, start_time, end_time, revenue, id, city, name, address, credit_card) - │ ordering: +revenue - │ estimated row count: 14,087 - │ order: +revenue - │ - └── • hash join (inner) - │ columns: (id, city, vehicle_city, rider_id, vehicle_id, start_address, end_address, start_time, end_time, revenue, id, city, name, address, credit_card) - │ estimated row count: 14,087 - │ equality: (rider_id) = (id) - │ - ├── • scan - │ columns: (id, city, vehicle_city, rider_id, vehicle_id, start_address, end_address, start_time, end_time, revenue) - │ estimated row count: 14,087 (11% of the table; stats collected 29 minutes ago) - │ table: rides@rides_pkey - │ spans: /"new york"-/"new york"/PrefixEnd - │ - └── • scan - columns: (id, city, name, address, credit_card) - estimated row count: 12,500 (100% of the table; stats collected 42 seconds ago) - table: users@users_pkey - spans: FULL SCAN -(25 rows) - -Time: 2ms total (execution 2ms / network 0ms) -~~~ - -#### `TYPES` option - -The `TYPES` option includes - -- The types of the values used in the statement plan. -- The SQL expressions that were involved in each processing stage, and includes the columns used by each level. - -{% include_cached copy-clipboard.html %} -~~~ sql -> EXPLAIN (TYPES) SELECT * FROM rides WHERE revenue > 90 ORDER BY revenue ASC; -~~~ - -~~~ - info ----------------------------------------------------------------------------------------------------- - - distribution: full - vectorized: true - - • sort - │ columns: (id uuid, city varchar, vehicle_city varchar, rider_id uuid, vehicle_id uuid, start_address varchar, end_address varchar, start_time timestamp, end_time timestamp, revenue decimal) - │ ordering: +revenue - │ estimated row count: 12,317 - │ order: +revenue - │ - └── • filter - │ columns: (id uuid, city varchar, vehicle_city varchar, rider_id uuid, vehicle_id uuid, start_address varchar, end_address varchar, start_time timestamp, end_time timestamp, revenue decimal) - │ estimated row count: 12,317 - │ filter: ((revenue)[decimal] > (90)[decimal])[bool] - │ - └── • scan - columns: (id uuid, city varchar, vehicle_city varchar, rider_id uuid, vehicle_id uuid, start_address varchar, end_address varchar, start_time timestamp, end_time timestamp, revenue decimal) - estimated row count: 125,000 (100% of the table; stats collected 29 minutes ago) - table: rides@rides_pkey - spans: FULL SCAN -(19 rows) - -Time: 1ms total (execution 1ms / network 0ms) -~~~ - -#### `OPT` option - -To display the statement plan tree generated by the [cost-based optimizer](cost-based-optimizer.html), use the `OPT` option . 
For example: - -{% include_cached copy-clipboard.html %} -~~~ sql -> EXPLAIN (OPT) SELECT * FROM rides WHERE revenue > 90 ORDER BY revenue ASC; -~~~ - -~~~ - info -------------------------------- - sort - └── select - ├── scan rides - └── filters - └── revenue > 90 -(5 rows) - -Time: 1ms total (execution 1ms / network 0ms) -~~~ - -`OPT` has four suboptions: [`VERBOSE`](#opt-verbose-option), [`TYPES`](#opt-types-option), [`ENV`](#opt-env-option), [`MEMO`](#opt-memo-option). - -##### `OPT, VERBOSE` option - -To include cost details used by the optimizer in planning the query, use the `OPT, VERBOSE` option: - -{% include_cached copy-clipboard.html %} -~~~ sql -> EXPLAIN (OPT, VERBOSE) SELECT * FROM rides WHERE revenue > 90 ORDER BY revenue ASC; -~~~ - -~~~ - info ----------------------------------------------------------------------------------------------------- ... - sort - ├── columns: id:1 city:2 vehicle_city:3 rider_id:4 vehicle_id:5 start_address:6 end_address:7 start_time:8 end_time:9 revenue:10 - ├── immutable - ├── stats: [rows=12316.644, distinct(10)=9.90909091, null(10)=0] - │ histogram(10)= 0 0 11130 1187 - │ <--- 90 ------- 99 - ├── cost: 156091.288 - ├── key: (1,2) - ├── fd: (1,2)-->(3-10) - ├── ordering: +10 - ├── prune: (1-9) - ├── interesting orderings: (+2,+1) (+2,+4,+1) (+3,+5,+2,+1) (+8,+2,+1) (+4,+2,+1) - └── select - ├── columns: id:1 city:2 vehicle_city:3 rider_id:4 vehicle_id:5 start_address:6 end_address:7 start_time:8 end_time:9 revenue:10 - ├── immutable - ├── stats: [rows=12316.644, distinct(10)=9.90909091, null(10)=0] - │ histogram(10)= 0 0 11130 1187 - │ <--- 90 ------- 99 - ├── cost: 151266.03 - ├── key: (1,2) - ├── fd: (1,2)-->(3-10) - ├── prune: (1-9) - ├── interesting orderings: (+2,+1) (+2,+4,+1) (+3,+5,+2,+1) (+8,+2,+1) (+4,+2,+1) - ├── scan rides - │ ├── columns: id:1 city:2 vehicle_city:3 rider_id:4 vehicle_id:5 start_address:6 end_address:7 start_time:8 end_time:9 revenue:10 - │ ├── stats: [rows=125000, distinct(1)=125000, null(1)=0, distinct(2)=9, null(2)=0, distinct(10)=100, null(10)=0] - │ │ histogram(1)= 0 12 612 12 612 12 612 - <--- '00064a9c-dc44-4915-8000-00000000000c' ----- '0162f166-e008-49b0-8000-0000000002a5' ----- '02834d26-fa3f-4ca0-8000-0000000004cb' ----- '03c85c24-c404-4720- - │ │ histogram(2)= 0 14512 0 13637 0 14512 0 14087 0 13837 0 13737 0 13550 0 13412 0 13712 - │ │ <--- 'amsterdam' --- 'boston' --- 'los angeles' --- 'new york' --- 'paris' --- 'rome' --- 'san francisco' --- 'seattle' --- 'washington dc' - │ │ histogram(10)= 0 1387 1.2242e+05 1187 - │ │ <--- 0 ------------- 99 - │ ├── cost: 150016.01 - │ ├── key: (1,2) - │ ├── fd: (1,2)-->(3-10) - │ ├── prune: (1-10) - │ └── interesting orderings: (+2,+1) (+2,+4,+1) (+3,+5,+2,+1) (+8,+2,+1) (+4,+2,+1) - └── filters - └── revenue:10 > 90 [outer=(10), immutable, constraints=(/10: (/90 - ]; tight)] -(39 rows) - -Time: 4ms total (execution 3ms / network 1ms) -~~~ - -##### `OPT, TYPES` option - -To include cost and type details, use the `OPT, TYPES` option: - -{% include_cached copy-clipboard.html %} -~~~ sql -> EXPLAIN (OPT, TYPES) SELECT * FROM rides WHERE revenue > 90 ORDER BY revenue ASC; -~~~ - -~~~ - info ----------------------------------------------------------------------------------------------------- ... 
- sort - ├── columns: id:1(uuid!null) city:2(varchar!null) vehicle_city:3(varchar) rider_id:4(uuid) vehicle_id:5(uuid) start_address:6(varchar) end_address:7(varchar) start_time:8(timestamp) end_time:9(timestamp) revenue:10(decimal!null) - ├── immutable - ├── stats: [rows=12316.644, distinct(10)=9.90909091, null(10)=0] - │ histogram(10)= 0 0 11130 1187 - │ <--- 90 ------- 99 - ├── cost: 156091.288 - ├── key: (1,2) - ├── fd: (1,2)-->(3-10) - ├── ordering: +10 - ├── prune: (1-9) - ├── interesting orderings: (+2,+1) (+2,+4,+1) (+3,+5,+2,+1) (+8,+2,+1) (+4,+2,+1) - └── select - ├── columns: id:1(uuid!null) city:2(varchar!null) vehicle_city:3(varchar) rider_id:4(uuid) vehicle_id:5(uuid) start_address:6(varchar) end_address:7(varchar) start_time:8(timestamp) end_time:9(timestamp) revenue:10(decimal!null) - ├── immutable - ├── stats: [rows=12316.644, distinct(10)=9.90909091, null(10)=0] - │ histogram(10)= 0 0 11130 1187 - │ <--- 90 ------- 99 - ├── cost: 151266.03 - ├── key: (1,2) - ├── fd: (1,2)-->(3-10) - ├── prune: (1-9) - ├── interesting orderings: (+2,+1) (+2,+4,+1) (+3,+5,+2,+1) (+8,+2,+1) (+4,+2,+1) - ├── scan rides - │ ├── columns: id:1(uuid!null) city:2(varchar!null) vehicle_city:3(varchar) rider_id:4(uuid) vehicle_id:5(uuid) start_address:6(varchar) end_address:7(varchar) start_time:8(timestamp) end_time:9(timestamp) revenue:10(decimal) - │ ├── stats: [rows=125000, distinct(1)=125000, null(1)=0, distinct(2)=9, null(2)=0, distinct(10)=100, null(10)=0] - │ │ histogram(1)= 0 12 612 12 612 12 612 - │ │ <--- '00064a9c-dc44-4915-8000-00000000000c' ----- '0162f166-e008-49b0-8000-0000000002a5' ----- '02834d26-fa3f-4ca0-8000-0000000004cb' ----- '03c85c24-c404-4720- - │ │ histogram(2)= 0 14512 0 13637 0 14512 0 14087 0 13837 0 13737 0 13550 0 13412 0 13712 - │ │ <--- 'amsterdam' --- 'boston' --- 'los angeles' --- 'new york' --- 'paris' --- 'rome' --- 'san francisco' --- 'seattle' --- 'washington dc' - │ │ histogram(10)= 0 1387 1.2242e+05 1187 - │ │ <--- 0 ------------- 99 - │ ├── cost: 150016.01 - │ ├── key: (1,2) - │ ├── fd: (1,2)-->(3-10) - │ ├── prune: (1-10) - │ └── interesting orderings: (+2,+1) (+2,+4,+1) (+3,+5,+2,+1) (+8,+2,+1) (+4,+2,+1) - └── filters - └── gt [type=bool, outer=(10), immutable, constraints=(/10: (/90 - ]; tight)] - ├── variable: revenue:10 [type=decimal] - └── const: 90 [type=decimal] -(41 rows) - -Time: 4ms total (execution 3ms / network 1ms) -~~~ - -##### `OPT, ENV` option - -To include all details used by the optimizer, including statistics, use the `OPT, ENV` option. - -{% include_cached copy-clipboard.html %} -~~~ sql -> EXPLAIN (OPT, ENV) SELECT * FROM rides WHERE revenue > 90 ORDER BY revenue ASC; -~~~ - -The output of `EXPLAIN (OPT, ENV)` is a URL with the data encoded in the fragment portion. Encoding the data makes it easier to share debugging information across different systems without encountering formatting issues. Opening the URL shows a page with the decoded data. The data is processed in the local browser session and is never sent out over the network. Keep in mind that if you are using any browser extensions, they may be able to access the data locally. - -~~~ - info ------------------------------------------------------------------ ... - https://cockroachdb.github.io/text/decode.html#eJzsm9Fum0gXx6_L ... -(1 row) - -Time: 32ms total (execution 32ms / network 0ms) -~~~ - -When you open the URL you should see the following output in your browser. 
- -~~~ --- Version: CockroachDB CCL - --- reorder_joins_limit has the default value: 8 --- enable_zigzag_join has the default value: on --- optimizer_use_histograms has the default value: on --- optimizer_use_multicol_stats has the default value: on --- locality_optimized_partitioned_index_scan has the default value: on --- distsql has the default value: auto --- vectorize has the default value: on - -CREATE TABLE public.rides ( - id UUID NOT NULL, - city VARCHAR NOT NULL, - vehicle_city VARCHAR NULL, - rider_id UUID NULL, - vehicle_id UUID NULL, - start_address VARCHAR NULL, - end_address VARCHAR NULL, - start_time TIMESTAMP NULL, - end_time TIMESTAMP NULL, - revenue DECIMAL(10,2) NULL, - CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC), - CONSTRAINT fk_city_ref_users FOREIGN KEY (city, rider_id) REFERENCES public.users(city, id), - CONSTRAINT fk_vehicle_city_ref_vehicles FOREIGN KEY (vehicle_city, vehicle_id) REFERENCES public.vehicles(city, id), - INDEX rides_auto_index_fk_city_ref_users (city ASC, rider_id ASC), - INDEX rides_auto_index_fk_vehicle_city_ref_vehicles (vehicle_city ASC, vehicle_id ASC), - INDEX rides_start_time_idx (start_time ASC) STORING (rider_id), - INDEX rides_rider_id_idx (rider_id ASC), - FAMILY "primary" (id, city, vehicle_city, rider_id, vehicle_id, start_address, end_address, start_time, end_time, revenue), - CONSTRAINT check_vehicle_city_city CHECK (vehicle_city = city) -); - -ALTER TABLE movr.public.rides INJECT STATISTICS '[ - { - "columns": [ - "city" - ], - "created_at": "2021-03-16 17:27:01.301903", - "distinct_count": 9, - "histo_col_type": "STRING", - "name": "__auto__", - "null_count": 0, - "row_count": 125000 - }, - { - "columns": [ - "id" - ], - "created_at": "2021-03-16 17:27:01.301903", - "distinct_count": 125617, - "histo_col_type": "UUID", - "name": "__auto__", - "null_count": 0, - "row_count": 125000 - }, - { - "columns": [ - "city", - "id" - ], - "created_at": "2021-03-16 17:27:01.301903", - "distinct_count": 124937, - "histo_col_type": "", - "name": "__auto__", - "null_count": 0, - "row_count": 125000 - }, - ... -]'; - -EXPLAIN (OPT, ENV) SELECT * FROM rides WHERE revenue > 90 ORDER BY revenue ASC; ----- -sort - └── select - ├── scan rides - └── filters - └── revenue > 90 -~~~ - -##### `OPT, MEMO` option - -The `MEMO` suboption prints a representation of the optimizer memo with the best plan. You can use the `MEMO` flag in combination with other flags. For example, `EXPLAIN (OPT, MEMO, VERBOSE)` prints the memo along with verbose output for the best plan. - - -~~~sql -EXPLAIN (OPT, MEMO) SELECT * FROM rides WHERE revenue > 90 ORDER BY revenue ASC; -~~~ - -~~~ - info ----------------------------------------------------------------------------------------------------- ... 
- memo (optimized, ~5KB, required=[presentation: info:13]) - ├── G1: (explain G2 [presentation: id:1,city:2,vehicle_city:3,rider_id:4,vehicle_id:5,start_address:6,end_address:7,start_time:8,end_time:9,revenue:10] [ordering: +10]) - │ └── [presentation: info:13] - │ ├── best: (explain G2="[presentation: id:1,city:2,vehicle_city:3,rider_id:4,vehicle_id:5,start_address:6,end_address:7,start_time:8,end_time:9,revenue:10] [ordering: +10]" [presentation: id:1,city:2,vehicle_city:3,rider_id:4,vehicle_id:5,start_address:6,end_address:7,start_time:8,end_time:9,revenue:10] [ordering: +10]) - │ └── cost: 2939.68 - ├── G2: (select G3 G4) - │ ├── [presentation: id:1,city:2,vehicle_city:3,rider_id:4,vehicle_id:5,start_address:6,end_address:7,start_time:8,end_time:9,revenue:10] [ordering: +10] - │ │ ├── best: (sort G2) - │ │ └── cost: 2939.66 - │ └── [] - │ ├── best: (select G3 G4) - │ └── cost: 2883.30 - ├── G3: (scan rides,cols=(1-10)) - │ ├── [ordering: +10] - │ │ ├── best: (sort G3) - │ │ └── cost: 3551.50 - │ └── [] - │ ├── best: (scan rides,cols=(1-10)) - │ └── cost: 2863.02 - ├── G4: (filters G5) - ├── G5: (gt G6 G7) - ├── G6: (variable revenue) - └── G7: (const 90) - sort - └── select - ├── scan rides - └── filters - └── revenue > 90 -(28 rows) - - -Time: 2ms total (execution 2ms / network 1ms) -~~~ - - -#### `VEC` option - -To view details about the [vectorized execution plan](vectorized-execution.html#how-vectorized-execution-works) for the query, use the `VEC` option. - -{% include_cached copy-clipboard.html %} -~~~ sql -> EXPLAIN (VEC) SELECT * FROM rides WHERE revenue > 90 ORDER BY revenue ASC; -~~~ - -The output shows the different internal functions that will be used to process each batch of column-oriented data. - -~~~ - info ------------------------------------------------- - │ - └ Node 1 - └ *colexec.sortOp - └ *colexecsel.selGTDecimalDecimalConstOp - └ *colfetcher.ColBatchScan -(5 rows) - -Time: 1ms total (execution 1ms / network 0ms) -~~~ - -#### `DISTSQL` option - -To view a physical statement plan that provides high level information about how a query will be executed, use the `DISTSQL` option. For more information about distributed SQL queries, see the [DistSQL section of our SQL layer architecture](architecture/sql-layer.html#distsql). - -{% include {{ page.version.version }}/sql/physical-plan-url.md %} - -For example, the following `EXPLAIN (DISTSQL)` statement generates a physical plan for a simple query against the [TPC-H database](http://www.tpc.org/tpch/) loaded to a 3-node CockroachDB cluster: - -{% include_cached copy-clipboard.html %} -~~~ sql -> EXPLAIN (DISTSQL) SELECT l_shipmode, AVG(l_extendedprice) FROM lineitem GROUP BY l_shipmode; -~~~ - -The output of `EXPLAIN (DISTSQL)` is a URL for a graphical diagram that displays the processors and operations that make up the physical statement plan. For details about the physical statement plan, see [DistSQL plan diagram](explain-analyze.html#distsql-plan-diagram). - -~~~ - automatic | url ------------+---------------------------------------------- - true | https://cockroachdb.github.io/distsqlplan ... -~~~ - -To view the [DistSQL plan diagram](explain-analyze.html#distsql-plan-diagram), open the URL. 
You should see the following: - -EXPLAIN (DISTSQL) - -To include the data types of the input columns in the physical plan, use `EXPLAIN(DISTSQL, TYPES)`: - -{% include_cached copy-clipboard.html %} -~~~ sql -> EXPLAIN (DISTSQL, TYPES) SELECT l_shipmode, AVG(l_extendedprice) FROM lineitem GROUP BY l_shipmode; -~~~ - -~~~ - automatic | url ------------+---------------------------------------------- - true | https://cockroachdb.github.io/distsqlplan ... -~~~ - -Open the URL. You should see the following: - -EXPLAIN (DISTSQL) - -### Find the indexes and key ranges a query uses - -You can use `EXPLAIN` to understand which indexes and key ranges queries use, which can help you ensure a query isn't performing a full table scan. - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE kv (k INT PRIMARY KEY, v INT); -~~~ - -Because column `v` is not indexed, queries filtering on it alone scan the entire table: - -{% include_cached copy-clipboard.html %} -~~~ sql -> EXPLAIN SELECT * FROM kv WHERE v BETWEEN 4 AND 5; -~~~ - -~~~ - info ------------------------------------ - distribution: full - vectorized: true - - • filter - │ filter: (v >= 4) AND (v <= 5) - │ - └── • scan - missing stats - table: kv@kv_pkey - spans: FULL SCAN -(10 rows) - -Time: 50ms total (execution 50ms / network 0ms) -~~~ - -You can disable statement plans that perform full table scans with the `disallow_full_table_scans` [session variable](set-vars.html). - -When `disallow_full_table_scans=on`, attempting to execute a query with a plan that includes a full table scan will return an error: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SET disallow_full_table_scans=on; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM kv WHERE v BETWEEN 4 AND 5; -~~~ - -~~~ -ERROR: query `SELECT * FROM kv WHERE v BETWEEN 4 AND 5` contains a full table/index scan which is explicitly disallowed -SQLSTATE: P0003 -HINT: try overriding the `disallow_full_table_scans` cluster/session setting -~~~ - -If there were an index on `v`, CockroachDB would be able to avoid scanning the entire table: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE INDEX v ON kv (v); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> EXPLAIN SELECT * FROM kv WHERE v BETWEEN 4 AND 5; -~~~ - -~~~ - info --------------------------------------------------------------------------------- - distribution: local - vectorized: true - - • scan - estimated row count: 1 (100% of the table; stats collected 11 seconds ago) - table: kv@v - spans: [/4 - /5] -(7 rows) - -Time: 1ms total (execution 1ms / network 0ms) -~~~ - -Now only part of the index `v` is getting scanned, specifically the key range starting at (and including) 4 and stopping before 6. This statement plan is not distributed across nodes on the cluster. - -### Find out if a statement is using `SELECT FOR UPDATE` locking - -CockroachDB has support for ordering transactions by controlling concurrent access to one or more rows of a table using locks. `SELECT FOR UPDATE` locking can result in improved performance for contended operations. 
It applies to the following statements: - -- [`SELECT FOR UPDATE`](select-for-update.html) -- [`UPDATE`](update.html) - -Suppose you have a table of key-value pairs: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE IF NOT EXISTS kv (k INT PRIMARY KEY, v INT); -UPSERT INTO kv (k, v) VALUES (1, 5), (2, 10), (3, 15); -~~~ - -You can use `EXPLAIN` to determine whether the following `UPDATE` is using `SELECT FOR UPDATE` locking. - -{% include_cached copy-clipboard.html %} -~~~ sql -> EXPLAIN UPDATE kv SET v = 100 WHERE k = 1; -~~~ - -The following output contains a `locking strength` field, which means that `SELECT FOR UPDATE` locking is being used. If the `locking strength` field does not appear, the statement is not using `SELECT FOR UPDATE` locking. - -~~~ - info ------------------------------------------- - distribution: local - vectorized: true - - • update - │ table: kv - │ set: v - │ auto commit - │ - └── • render - │ - └── • scan - missing stats - table: kv@kv_pkey - spans: [/1 - /1] - locking strength: for update -(15 rows) - -Time: 1ms total (execution 1ms / network 0ms) -~~~ - -By default, `SELECT FOR UPDATE` locking is enabled for the initial row scan of `UPDATE` and `UPSERT` statements. To disable it, toggle the [`enable_implicit_select_for_update` session setting](show-vars.html#enable-implicit-select-for-update). - - - -## See also - -- [`ALTER TABLE`](alter-table.html) -- [`ALTER SEQUENCE`](alter-sequence.html) -- [`BACKUP`](backup.html) -- [`CANCEL JOB`](cancel-job.html) -- [`CREATE DATABASE`](create-database.html) -- [`CREATE STATISTICS`](create-statistics.html) -- [`DROP DATABASE`](drop-database.html) -- [`EXECUTE`](sql-grammar.html#execute_stmt) -- [`EXPLAIN ANALYZE`](explain-analyze.html) -- [`IMPORT`](import.html) -- [Indexes](indexes.html) -- [`INSERT`](insert.html) -- [`PAUSE JOB`](pause-job.html) -- [`RESET`](reset-vars.html) -- [`RESTORE`](restore.html) -- [`RESUME JOB`](resume-job.html) -- [`SELECT`](select-clause.html) -- [Selection Queries](selection-queries.html) -- [`SET`](set-vars.html) -- [`SET CLUSTER SETTING`](set-cluster-setting.html) -- [`SHOW COLUMNS`](show-columns.html) -- [`UPDATE`](update.html) -- [`UPSERT`](upsert.html) diff --git a/src/current/v22.1/export-data-with-changefeeds.md b/src/current/v22.1/export-data-with-changefeeds.md deleted file mode 100644 index 6655ea6410c..00000000000 --- a/src/current/v22.1/export-data-with-changefeeds.md +++ /dev/null @@ -1,43 +0,0 @@ ---- -title: Export Data with Changefeeds -summary: Use changefeeds to export table data from CockroachDB -toc: true -docs_area: stream_data ---- - -{% include_cached new-in.html version="v22.1" %} When you create an {{ site.data.products.enterprise }} changefeed, you can include the [`initial_scan = 'only'`](create-changefeed.html#initial-scan) option to specify that the changefeed should only complete a table scan. The changefeed emits messages for the table scan and then the job completes with a `succeeded` status. As a result, you can create a changefeed with `initial_scan = 'only'` to [export](export.html) data out of your database. - -The benefits of using changefeeds for this function compared to an export, include: - -- Changefeeds are jobs, which can be [paused](pause-job.html), [resumed](resume-job.html), and [cancelled](cancel-job.html). -- There is observability into a changefeed job using [`SHOW CHANGEFEED JOBS`](show-jobs.html#show-changefeed-jobs) and the [Changefeeds Dashboard](ui-cdc-dashboard.html) in the DB Console. 
-- [Changefeed sinks](changefeed-sinks.html) provide additional endpoints to send your data. -- You can use the [`format=csv`](create-changefeed.html#format) option with `initial_scan= 'only'` to emit messages in CSV format. - -Although this option offers an alternative way to export data out of your database, it is necessary to consider the following when you use [`CREATE CHANGEFEED`](create-changefeed.html) instead of [`EXPORT`](export.html): - -- Changefeeds do not offer any [filtering capabilities](export.html#export-using-a-select-statement). -- Changefeeds can emit [duplicate messages](changefeed-messages.html#ordering-guarantees). - -{{site.data.alerts.callout_info}} -{% include {{ page.version.version }}/cdc/initial-scan-limit-alter-changefeed.md %} -{{site.data.alerts.end}} - -To create a changefeed that will only complete an initial scan of a table(s), run the following: - -~~~ sql -CREATE CHANGEFEED FOR TABLE movr.users INTO '{scheme}://{host}:{port}?{query_parameters}' WITH initial_scan = 'only', format=csv; -~~~ - -The job will return a job ID once it has started. You can use `SHOW CHANGEFEED JOBS` to check on the status: - -~~~ sql -SHOW CHANGEFEED JOB {job ID}; -~~~ - -When the scan has completed you will find the output shows `succeeded` in the `status` field. - -## See also - -- [Changefeed Messages](changefeed-messages.html) -- [`CREATE CHANGEFEED`](create-changefeed.html) \ No newline at end of file diff --git a/src/current/v22.1/export-spatial-data.md b/src/current/v22.1/export-spatial-data.md deleted file mode 100644 index 96cfc0e170c..00000000000 --- a/src/current/v22.1/export-spatial-data.md +++ /dev/null @@ -1,129 +0,0 @@ ---- -title: Export Spatial Data -summary: Learn how to export spatial data from CockroachDB into various formats. -toc: true -docs_area: migrate ---- - - CockroachDB supports efficiently storing and querying spatial data. - -This page has instructions for exporting spatial data from CockroachDB and converting it to other spatial formats using the [`ogr2ogr`](https://gdal.org/programs/ogr2ogr.html) command. - -{% include {{page.version.version}}/spatial/ogr2ogr-supported-version.md %} - -## Step 1. Export data to CSV - -First, use the [`EXPORT`](export.html) statement to export your data to a CSV file. - -In the example statement below, we export the tornadoes database used in [Working with spatial data](spatial-data.html). - -The statement will place the CSV file in the node's [store directory](cockroach-start.html#store), in a subdirectory named `extern/tornadoes`. The file's name is automatically generated, and will be displayed as output in the [SQL shell](cockroach-sql.html). - -{% include_cached copy-clipboard.html %} -~~~ sql -EXPORT INTO CSV 'nodelocal://self/tornadoes' WITH nullas = '' FROM SELECT * from "1950-2018-torn-initpoint"; -~~~ - -~~~ - filename | rows | bytes ---------------------------------------------------+-------+----------- - export16467a35d30d25700000000000000001-n1.0.csv | 63645 | 16557064 -(1 row) -~~~ - -{{site.data.alerts.callout_info}} -This example uses local file storage. For more information about other locations where you can export your data (such as cloud storage), see [`EXPORT`](export.html). -{{site.data.alerts.end}} - -## Step 2. Combine multiple CSV files into one, as needed - -You should now have one or more CSV files in the `extern/tornadoes` subdirectory of your node's [store directory](cockroach-start.html#store). Depending on the size of the data set, there may be more than one CSV file. 
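For example, you can list the contents of that subdirectory to see how many files the export produced. This is a sketch: `/tmp/node0` is the example store path used elsewhere on this page, and the actual path and file names will differ in your environment.

{% include_cached copy-clipboard.html %}
~~~ shell
ls /tmp/node0/extern/tornadoes/
~~~

The file names listed should match the `filename` column returned by the `EXPORT` statement in the previous step.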
- -To combine multiple CSVs into one file: - -1. Open the CSV file where you will be storing the combined output in a text editor. You will need to manually add the CSV header columns to that file so that the `ogr2ogr` output we generate below will have the proper column names. Start by running the statement below on the table you are exporting to get the necessary column names: - - {% include_cached copy-clipboard.html %} - ~~~ sql - WITH x AS (SHOW COLUMNS FROM "1950-2018-torn-initpoint") SELECT string_agg(column_name, ',') FROM x; - ~~~ - - ~~~ - string_agg - ------------------------------------------------------------------------------------------------------ - gid,om,yr,mo,dy,date,time,tz,st,stf,stn,mag,inj,fat,loss,closs,slat,slon,elat,elon,len,wid,fc,geom - ~~~ - -2. Add the column names output above to your target output CSV file (e.g., `tornadoes.csv`) as header columns. For the tornadoes database, they should look like the following: - - ~~~ - gid, om, yr, mo, dy, date, time, tz, st, stf, stn, mag, inj, fat, loss, closs, slat, slon, elat, elon, len, wid, fc, geom - ~~~ - -2. Concatenate the non-header data from all of the exported CSV files, and append the output to the target CSV file as shown below. The node's store directory on this machine is `/tmp/node0`. - - {% include_cached copy-clipboard.html %} - ~~~ shell - cat /tmp/node0/extern/tornadoes/*.csv >> tornadoes.csv - ~~~ - -## Step 3. Convert CSV to other formats using `ogr2ogr` - -Now that you have your data in CSV format, you can convert it to other spatial formats using [`ogr2ogr`](https://gdal.org/programs/ogr2ogr.html). - -For example, to convert the data to SQL, run the following command: - -{% include_cached copy-clipboard.html %} -~~~ shell -ogr2ogr -f PGDUMP tornadoes.sql -lco LAUNDER=NO -lco DROP_TABLE=OFF -oo GEOM_POSSIBLE_NAMES=geom -oo KEEP_GEOM_COLUMNS=off tornadoes.csv -~~~ - -Note that the options `-oo GEOM_POSSIBLE_NAMES= -oo KEEP_GEOM_COLUMNS=off` are required no matter what output format you are converting into. - -For more information about the formats supported by `ogr2ogr`, see the [`ogr2ogr` documentation](https://gdal.org/programs/ogr2ogr.html). - -{% include {{page.version.version}}/spatial/ogr2ogr-supported-version.md %} - -Finally, note that SQL type information is lost in the conversion to CSV, such that the `tornadoes.sql` file output by the `ogr2ogr` command above lists every non-geometry field as a [`VARCHAR`](string.html). - -This can be addressed in one of the following ways: - -- Modify the data definitions in the SQL output file to use the correct types. - -- Run [`ALTER TYPE`](alter-type.html) statements to restore the data's SQL types after loading this data into another database (including another CockroachDB instance). 
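For the second approach, the following is a minimal sketch of an `ALTER TABLE ... ALTER COLUMN ... SET DATA TYPE` statement. It assumes the data was loaded into a table named `tornadoes` and that its `yr` column should be converted from `VARCHAR` back to an integer type; the table name, column name, and target type are illustrative placeholders to adapt to your schema. Column type conversions that rewrite data may also require enabling the `enable_experimental_alter_column_type_general` [session variable](set-vars.html).

{% include_cached copy-clipboard.html %}
~~~ sql
-- Illustrative names: a table called "tornadoes" whose "yr" column was
-- loaded as VARCHAR but should be an integer. Adjust to your schema.
SET enable_experimental_alter_column_type_general = true;

ALTER TABLE tornadoes ALTER COLUMN yr SET DATA TYPE INT4 USING yr::INT4;
~~~

You can then confirm the new column type with `SHOW COLUMNS FROM tornadoes;`.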
- -## See also - -- [`EXPORT`](export.html) -- [Migrate from Shapefiles](migrate-from-shapefiles.html) -- [Migrate from GeoJSON](migrate-from-geojson.html) -- [Migrate from GeoPackage](migrate-from-geopackage.html) -- [Migrate from OpenStreetMap](migrate-from-openstreetmap.html) -- [Spatial features](spatial-features.html) -- [Spatial indexes](spatial-indexes.html) -- [Working with Spatial Data](spatial-data.html) -- [Spatial and GIS Glossary of Terms](spatial-glossary.html) -- [Known Limitations](known-limitations.html#spatial-support-limitations) -- [Spatial functions](functions-and-operators.html#spatial-functions) -- [POINT](point.html) -- [LINESTRING](linestring.html) -- [POLYGON](polygon.html) -- [MULTIPOINT](multipoint.html) -- [MULTILINESTRING](multilinestring.html) -- [MULTIPOLYGON](multipolygon.html) -- [GEOMETRYCOLLECTION](geometrycollection.html) -- [Well known text](well-known-text.html) -- [Well known binary](well-known-binary.html) -- [GeoJSON](geojson.html) -- [SRID 4326 - longitude and latitude](srid-4326.html) -- [`ST_Contains`](st_contains.html) -- [`ST_ConvexHull`](st_convexhull.html) -- [`ST_CoveredBy`](st_coveredby.html) -- [`ST_Covers`](st_covers.html) -- [`ST_Disjoint`](st_disjoint.html) -- [`ST_Equals`](st_equals.html) -- [`ST_Intersects`](st_intersects.html) -- [`ST_Overlaps`](st_overlaps.html) -- [`ST_Touches`](st_touches.html) -- [`ST_Union`](st_union.html) -- [`ST_Within`](st_within.html) diff --git a/src/current/v22.1/export.md b/src/current/v22.1/export.md deleted file mode 100644 index 1dc20bbb712..00000000000 --- a/src/current/v22.1/export.md +++ /dev/null @@ -1,283 +0,0 @@ ---- -title: EXPORT -summary: Export tabular data from a CockroachDB cluster in CSV format. -toc: true -docs_area: reference.sql ---- - -The `EXPORT` [statement](sql-statements.html) exports tabular data or the results of arbitrary `SELECT` statements to the following: - -- CSV files -- Parquet files - -Using the [CockroachDB distributed execution engine](architecture/sql-layer.html#distsql), `EXPORT` parallelizes file creation across all nodes in the cluster, making it possible to quickly get large sets of data out of CockroachDB in a format that can be ingested by downstream systems. - -If you do not need distributed exports, you can [export tabular data in CSV format](#non-distributed-export-using-the-sql-client). - -{{site.data.alerts.callout_info}} -`EXPORT` no longer requires an {{ site.data.products.enterprise }} license. -{{site.data.alerts.end}} - -## Cancelling export - -After the export has been initiated, you can cancel it with [`CANCEL QUERY`](cancel-query.html). - -## Synopsis - -
    {% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/export.html %}
    - -{{site.data.alerts.callout_info}} -The `EXPORT` statement cannot be used within a [transaction](transactions.html). -{{site.data.alerts.end}} - -## Required privileges - - The user must have the `SELECT` [privilege](security-reference/authorization.html#managing-privileges) on the table being exported, unless the [destination URI requires `admin` privileges](import.html#source-privileges). - - {% include {{ page.version.version }}/misc/s3-compatible-warning.md %} - -## Parameters - - Parameter | Description ------------|------------- - `file_location` | Specify the [URL of the file location](#export-file-url) where you want to store the exported data.

    Note: It is best practice to use a unique destination for each export, to avoid mixing files from different exports in one directory. - `opt_with_options` | Control your export's behavior with [these options](#export-options). - `select_stmt` | Specify the query whose result you want to export. - `table_name` | Specify the name of the table you want to export. - -### Export file URL - -You can specify the base directory where you want to store the exported files. CockroachDB will create the export file(s) in the specified directory with programmatically generated names (e.g., `exportabc123-n1.1.csv`, `exportabc123-n1.2.csv`, `exportabc123-n2.1.csv`, ...). Each export should use a unique destination directory to avoid collision with other exports. - -The `EXPORT` command [returns](#success-responses) the list of files to which the data was exported. You may wish to record these for use in subsequent imports. - -{{site.data.alerts.callout_info}} -A hexadecimal hash code (`abc123...` in the file names) uniquely identifies each export **run**; files sharing the same hash are part of the same export. If you see multiple hash codes within a single destination directory, then the directory contains multiple exports, which will likely cause confusion (duplication) on import. We recommend that you manually clean up the directory, to ensure that it contains only a single export run. -{{site.data.alerts.end}} - -For more information, see the following: - -- [Use Cloud Storage for Bulk Operations](use-cloud-storage-for-bulk-operations.html) -- [Use a Local File Server for Bulk Operations](use-a-local-file-server-for-bulk-operations.html) - -### Export options - -You can control the [`EXPORT`](export.html) process's behavior using any of the following key-value pairs as a `kv_option`. - -Key |
    Context
    | Value | ---------------------+-----------------+----------------------------------------------------------------------------------------------------------------------------------------------------------- -`delimiter` | `CSV DATA` | The ASCII character that delimits columns in your rows. If not using comma as your column delimiter, you can specify another ASCII character as the delimiter. **Default:** `,`.

    To use tab-delimited values: `WITH delimiter = e'\t'`

    See the [example](#export-a-table-into-csv). -`nullas` | `CSV DATA`, `DELIMITED DATA` | The string that should be used to represent `NULL` values. To avoid collisions, it is important to pick `nullas` values that do not appear in the exported data.

    To use empty columns as `NULL`: `WITH nullas = ''`

    See the [example](#export-a-table-into-csv). -`compression` | `CSV DATA`, `PARQUET DATA` | Instructs `EXPORT` to write compressed files to the specified destination.

    For `CSV DATA`, `gzip` compression is supported. For `PARQUET DATA`, both `gzip` and `snappy` compression are supported.

    See the [example](#export-compressed-files). -`chunk_rows` | `CSV DATA`, `PARQUET DATA` | The number of rows to be converted and written to a single file. **Default:** `100000`.
    For example, `WITH chunk_rows = '5000'` for a table with 10,000 rows would produce two files.

    **Note**: `EXPORT` stops and uploads the file when either the `chunk_rows` or `chunk_size` limit is reached, whichever comes first. -`chunk_size` | `CSV DATA`, `PARQUET DATA` | A target size per file that you can specify during an `EXPORT`. Once the target size is reached, the file is uploaded before processing further rows. **Default:** `32MB`.
    For example, to set the size of each file uploaded during the export to 10MB: `WITH chunk_size = '10MB'`.

    **Note**:`EXPORT` will stop and upload the file whether the configured limit for `chunk_rows` or `chunk_size` is reached first. - -## Success responses - -Successful `EXPORT` returns a table of (perhaps multiple) files to which the data was exported: - -| Response | Description | -|-----------|-------------| -`filename` | The file to which the data was exported. -`rows` | The number of rows exported to this file. -`bytes` | The file size in bytes. - -## Parquet types - -CockroachDB types map to [Parquet types](https://github.com/apache/parquet-format/blob/master/LogicalTypes.md) as per the following: - -| CockroachDB Type | Parquet Type | Parquet Logical Type | ---------------------|--------------|---------------------- -| [`BOOL`](bool.html) | `BOOLEAN` | `nil` | -| [`STRING`](string.html) | byte array | `STRING` | -| [`COLLATE`](collate.html) | byte array | `STRING` | -| [`INET`](inet.html) | byte array | `STRING` | -| [`JSONB`](jsonb.html) | byte array | `JSON` | -| [`INT`](int.html) [`INT8`](int.html) | `INT64` | `nil` | -| [`INT2`](int.html) [`INT4`](int.html) | `INT32` | `nil` | -| [`FLOAT`](float.html) [`FLOAT8`](float.html) | `FLOAT64` | `nil` | -| [`FLOAT4`](float.html) | `FLOAT32` | `nil` | -| [`DECIMAL`](decimal.html) | byte array | `DECIMAL`
    Note: scale and precision data are preserved in the Parquet file. | -| [`UUID`](uuid.html) | `fixed_len_byte_array` | `nil` | -| [`BYTES`](bytes.html) | byte array | `nil` | -| [`BIT`](bit.html) | byte array | `nil` | -| [`ENUM`](enum.html) | byte array | `ENUM` | -| [`Box2D`](data-types.html#data-type-conversions-and-casts) | byte array | `STRING` | -| [`GEOGRAPHY`](data-types.html#data-type-conversions-and-casts) | byte array | `nil` | -| [`GEOMETRY`](data-types.html#data-type-conversions-and-casts) | byte array | `nil` | -| [`DATE`](date.html) | byte array | `STRING` | -| [`TIME`](time.html) | `INT64` | `TIME`
    Note: microseconds after midnight;
    exporting to microsecond precision. | -| [`TIMETZ`](time.html) | byte array | `STRING`
    Note: exporting to microsecond precision. | -| [`INTERVAL`](interval.html) | byte array | `STRING`
    Note: specifically represented as ISO8601. | -| [`TIMESTAMP`](timestamp.html) | byte array | `STRING`
    Note: exporting to microsecond precision. | -| [`TIMESTAMPTZ`](timestamp.html) | byte array | `STRING`
    Note: exporting to microsecond precision. | -| [`ARRAY`](array.html) | Encoded as a repeated field;
    each array value is encoded as per the preceding types. | `nil` | - -## Exports and `AS OF SYSTEM TIME` - -The [`AS OF SYSTEM TIME`](as-of-system-time.html) clause is not required in `EXPORT` statements, even though they are long-running queries. If it is omitted, `AS OF SYSTEM TIME` is implicitly set to the start of the statement's execution. The risk of [contention](performance-best-practices-overview.html#transaction-contention) is low because other transactions would need to have exactly the same transaction start time as the `EXPORT` statement's start time. - -## Examples - -{% include {{ page.version.version }}/backups/bulk-auth-options.md %} - -Each of these examples use the `bank` database and the `customers` table; `customer-export-data` is the demonstration path to which we're exporting our customers' data in this example. - -### Export a table into CSV - -This example uses the `delimiter` option to define the ASCII character that delimits columns in your rows: - -{% include_cached copy-clipboard.html %} -~~~ sql -> EXPORT INTO CSV - 's3://{BUCKET NAME}/{customer-export-data}?AWS_ACCESS_KEY_ID={ACCESS KEY}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}' - WITH delimiter = '|' FROM TABLE bank.customers; -~~~ - -This examples uses the `nullas` option to define the string that represents `NULL` values: - -{% include_cached copy-clipboard.html %} -~~~ sql -> EXPORT INTO CSV - 's3://{BUCKET NAME}/{customer-export-data}?AWS_ACCESS_KEY_ID={ACCESS KEY}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}' - WITH nullas = '' FROM TABLE bank.customers; -~~~ - -### Export a table into Parquet - -~~~ sql -> EXPORT INTO PARQUET - 's3://{BUCKET NAME}/{customer-export-data}?AWS_ACCESS_KEY_ID={ACCESS KEY}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}' - FROM TABLE bank.customers; -~~~ - -### Export using a `SELECT` statement - -{% include_cached copy-clipboard.html %} -~~~ sql -> EXPORT INTO CSV - 's3://{BUCKET NAME}/{customer-export-data}?AWS_ACCESS_KEY_ID={ACCESS KEY}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}' - FROM SELECT * FROM bank.customers WHERE id >= 100; -~~~ - -For more information, see [selection queries](selection-queries.html). - -### Non-distributed export using the SQL client - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach sql -e "SELECT * from bank.customers WHERE id>=100;" --format=csv > my.csv -~~~ - -For more information about the SQL client, see [`cockroach sql`](cockroach-sql.html). 
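If you need the exported rows to reflect a specific historical timestamp (see the Exports and `AS OF SYSTEM TIME` section above), you can add an [`AS OF SYSTEM TIME`](as-of-system-time.html) clause to the `SELECT` statement. The following sketch uses an arbitrary 10-second staleness:

{% include_cached copy-clipboard.html %}
~~~ shell
$ cockroach sql -e "SELECT * FROM bank.customers AS OF SYSTEM TIME '-10s' WHERE id >= 100;" --format=csv > my.csv
~~~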
- -### Export compressed files - -`gzip` compression is supported for both `PARQUET` and `CSV` file formats: - -{% include_cached copy-clipboard.html %} -~~~ sql -> EXPORT INTO CSV - 's3://{BUCKET NAME}/{customer-export-data}?AWS_ACCESS_KEY_ID={ACCESS KEY}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}' - WITH compression = 'gzip' FROM TABLE bank.customers; -~~~ - -~~~ -filename | rows | bytes ----------------------------------------------------+------+-------- -export16808a04292505c80000000000000001-n1.0.csv.gz | 17 | 824 -(1 row) -~~~ - -`PARQUET` data also supports `snappy` compression: - -{% include_cached copy-clipboard.html %} -~~~ sql -> EXPORT INTO PARQUET - 's3://{BUCKET NAME}/{customer-export-data}?AWS_ACCESS_KEY_ID={ACCESS KEY}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}' - WITH compression = 'snappy' FROM TABLE bank.customers; -~~~ - -~~~ -filename | rows | bytes ------------------------------------------------------------+------+-------- -export16808a04292505c80000000000000001-n1.0.parquet.snappy | 17 | 824 -(1 row) -~~~ - -### Export tabular data with an S3 storage class - -{% include_cached new-in.html version="v22.1" %} To associate your export objects with a [specific storage class](use-cloud-storage-for-bulk-operations.html#amazon-s3-storage-classes) in your Amazon S3 bucket, use the `S3_STORAGE_CLASS` parameter with the class. For example, the following S3 connection URI specifies the `INTELLIGENT_TIERING` storage class: - -{% include_cached copy-clipboard.html %} -~~~ sql -> EXPORT INTO CSV - 's3://{BUCKET NAME}/{customer-export-data}?AWS_ACCESS_KEY_ID={ACCESS KEY}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}&S3_STORAGE_CLASS=INTELLIGENT_TIERING' - WITH delimiter = '|' FROM TABLE bank.customers; -~~~ - -{% include {{ page.version.version }}/misc/storage-classes.md %} - - -### Export data out of CockroachDB {{ site.data.products.cloud }} - -Using `EXPORT` with [`userfile`](use-userfile-for-bulk-operations.html) is not recommended. You can either export data to [cloud storage](use-cloud-storage-for-bulk-operations.html) or to a local CSV file by using [`cockroach sql --execute`](../{{site.current_cloud_version}}/cockroach-sql.html#general): - -
    - - -
    - -
    - -The following example exports the `customers` table from the `bank` database into a local CSV file: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql \ ---url 'postgres://{username}:{password}@{host}:26257?sslmode=verify-full&sslrootcert={path/to/certs_dir}/cc-ca.crt' \ ---execute "SELECT * FROM bank.customers" --format=csv > /Users/{username}/{path/to/file}/customers.csv -~~~ - -
    - -
    - -The following example exports the `customers` table from the `bank` database into a cloud storage bucket in CSV format: - -~~~sql -EXPORT INTO CSV - 's3://{BUCKET NAME}/{customer-export-data}?AWS_ACCESS_KEY_ID={ACCESS KEY}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}' - WITH delimiter = '|' FROM TABLE bank.customers; -~~~ - -
    - -### View a running export - -View running exports by using [`SHOW STATEMENTS`](show-statements.html): - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW STATEMENTS; -~~~ - -### Cancel a running export - -Use [`SHOW STATEMENTS`](show-statements.html) to get a running export's `query_id`, which can be used to [cancel the export](cancel-query.html): - -{% include_cached copy-clipboard.html %} -~~~ sql -> CANCEL QUERY '14dacc1f9a781e3d0000000000000001'; -~~~ - -## Known limitation - -`EXPORT` may fail with an error if the SQL statements are incompatible with DistSQL. In that case, [export tabular data in CSV format](#non-distributed-export-using-the-sql-client). - -## See also - -- [`IMPORT`](import.html) -- [`IMPORT INTO`](import-into.html) -- [Use a Local File Server for Bulk Operations](use-a-local-file-server-for-bulk-operations.html) -- [Use Cloud Storage for Bulk Operations](use-cloud-storage-for-bulk-operations.html) diff --git a/src/current/v22.1/expression-indexes.md b/src/current/v22.1/expression-indexes.md deleted file mode 100644 index 0486dac9708..00000000000 --- a/src/current/v22.1/expression-indexes.md +++ /dev/null @@ -1,187 +0,0 @@ ---- -title: Expression Indexes -summary: Expression indexes apply a scalar or functional expression to one or more columns. -toc: true -keywords: gin, gin index, gin indexes, inverted index, inverted indexes, accelerated index, accelerated indexes -docs_area: develop ---- - - - -An _expression index_ is an index created by applying an [expression](scalar-expressions.html) to a column. For example, to facilitate fast, case insensitive lookups of user names you could create an index by applying the function `lower` to the `name` column: `CREATE INDEX users_name_idx ON users (lower(name))`. The value of the expression is stored only in the expression index, not in the primary family index. - -Both [standard indexes](create-index.html) and [GIN indexes](inverted-indexes.html) support expressions. You can use expressions in [unique indexes](create-index.html#unique-indexes) and [partial indexes](partial-indexes.html). - -You can reference multiple columns in an expression index. - -## Create an expression index - -To create an expression index, use the syntax: - -{% include_cached copy-clipboard.html %} -~~~sql -CREATE INDEX index_name ON table_name (expression(column_name)); -~~~ - -## View index expression - -To view the expression used to generate the index, run `SHOW CREATE TABLE`: - -{% include_cached copy-clipboard.html %} -~~~sql -> SHOW CREATE TABLE users; -~~~ - -~~~ - table_name | create_statement --------------+-------------------------------------------------------------------------------------- - users | CREATE TABLE public.users ( -... - | INDEX users_name_idx (lower(name:::STRING) ASC), -... - | ) -(1 row) -~~~ - - -## Examples - -### Simple examples - -Suppose you have a table with the following columns: -{% include_cached copy-clipboard.html %} -~~~sql -CREATE TABLE t (i INT, b BOOL, s STRING, j JSON); -~~~ - -The following examples illustrate how to create various types of expression indexes. 
- -A partial, multi-column index, where one column is defined with an expression: -{% include_cached copy-clipboard.html %} -~~~sql -CREATE INDEX ON t (lower(s), b) WHERE i > 0; -~~~ - -A unique, partial, multi-column index, where one column is defined with an expression: -{% include_cached copy-clipboard.html %} -~~~sql -CREATE UNIQUE INDEX ON t (lower(s), b) WHERE i > 0; -~~~ - -A GIN, partial, multi-column index, where one column is defined with an expression: -{% include_cached copy-clipboard.html %} -~~~sql -CREATE INVERTED INDEX ON t (lower(s), i, j) WHERE b; -~~~ - -### Use an expression to index a field in a `JSONB` column - -You can use an expression in an index definition to index a field in a JSON column. You can also use an expression to create a [GIN index](inverted-indexes.html) on a subset of the JSON column. - -Normally an index is used only if the cost of using the index is less than the cost of a full table scan. To disable that optimization, turn off statistics collection: - -~~~sql -> SET CLUSTER SETTING sql.stats.automatic_collection.enabled = false; -~~~ - -Create a table of three users with a JSON object in the `user_profile` column: - -{% include_cached copy-clipboard.html %} -~~~sql -> CREATE TABLE users ( - profile_id UUID PRIMARY KEY DEFAULT gen_random_uuid(), - last_updated TIMESTAMP DEFAULT now(), - user_profile JSONB -); - -> INSERT INTO users (user_profile) VALUES - ('{"id": "d78236", "firstName": "Arthur", "lastName": "Read", "birthdate": "2010-01-25", "school": "PVPHS", "credits": 120, "sports": ["none"], "clubs": ["Robotics"]}'), - ('{"id": "f98112", "firstName": "Buster", "lastName": "Bunny", "birthdate": "2011-11-07", "school": "THS", "credits": 67, "sports": ["Gymnastics"], "clubs": ["Theater"]}'), - ('{"id": "t63512", "firstName": "Jane", "lastName": "Narayan", "birthdate": "2012-12-12", "school" : "Brooklyn Tech", "credits": 98, "sports": ["Track and Field"], "clubs": ["Chess"]}'); -~~~ - -When you perform a query that filters on the `user_profile->'birthdate'` column: - -{% include_cached copy-clipboard.html %} -~~~sql -> EXPLAIN SELECT jsonb_pretty(user_profile) FROM users WHERE user_profile->>'birthdate' = '2011-11-07'; -~~~ - -You can see that a full scan is performed: - -~~~ - info -------------------------------------------------------------------------------------------- - distribution: full - vectorized: true - - • render - │ estimated row count: 0 - │ - └── • filter - │ estimated row count: 0 - │ filter: (user_profile->'birthdate') = '2011-11-07' - │ - └── • index join - │ estimated row count: 3 - │ table: users@users_pkey - │ - └── • scan - missing stats - table: users@users_pkey - spans: FULL SCAN -~~~ - -To limit the number of rows scanned, create an expression index on the `birthdate` field: - -{% include_cached copy-clipboard.html %} -~~~sql -> CREATE INDEX timestamp_idx ON users (parse_timestamp(user_profile->>'birthdate')); -~~~ - -When you filter on the expression `parse_timestamp(user_profile->'birthdate')`, only the row matching the filter is scanned: - -{% include_cached copy-clipboard.html %} -~~~sql -> EXPLAIN SELECT jsonb_pretty(user_profile) FROM users WHERE parse_timestamp(user_profile->>'birthdate') = '2011-11-07'; -~~~ - -~~~ - info -------------------------------------------------------------------------------------- - distribution: local - vectorized: true - - • render - │ estimated row count: 1 - │ - └── • index join - │ estimated row count: 1 - │ table: users@users_pkey - │ - └── • scan - missing stats - table: 
users@timestamp_idx - spans: [/'2011-11-07 00:00:00' - /'2011-11-07 00:00:00'] -~~~ - -As shown in this example, for an expression index to be used to service a query, the query must constrain the **same exact expression** in its filter. - -## Known limitations - -Expression indexes have the following limitations: - -- The expression cannot reference columns outside the index's table. -- Functional expression output must be determined by the input arguments. For example, you can't use the [volatile function](functions-and-operators.html#function-volatility) `now()` to create an index because its output depends on more than just the function arguments. -- {% include {{page.version.version}}/sql/expression-indexes-cannot-reference-computed-columns.md %} -- {% include {{page.version.version}}/sql/expressions-as-on-conflict-targets.md %} - -## See also - -- [Computed Columns](computed-columns.html) -- [`CREATE INDEX`](create-index.html) -- [`DROP INDEX`](drop-index.html) -- [`RENAME INDEX`](rename-index.html) -- [`SHOW INDEX`](show-index.html) -- [Indexes](indexes.html) -- [SQL Statements](sql-statements.html) diff --git a/src/current/v22.1/file-an-issue.md b/src/current/v22.1/file-an-issue.md deleted file mode 100644 index 579d0b6c34a..00000000000 --- a/src/current/v22.1/file-an-issue.md +++ /dev/null @@ -1,67 +0,0 @@ ---- -title: File an Issue -summary: Learn how to file a GitHub issue with CockroachDB. -toc: false -docs_area: manage ---- - -If you've tried to [troubleshoot](troubleshooting-overview.html) an issue yourself, have [reached out for help](support-resources.html), and are still stumped, you can file an issue in GitHub. - -To file an issue in GitHub, we need the following information: - -1. A summary of the issue. - -2. The steps to reproduce the issue. - -3. The result you expected. - -4. The result that actually occurred. - -5. The first few lines of the log file from each node in the cluster in a timeframe as close as possible to reproducing the issue. On most Unix-based systems running with defaults, you can get this information using the following command: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ grep -F '[config]' cockroach-data/logs/cockroach.log - ~~~ - {{site.data.alerts.callout_info}}You might need to replace cockroach-data/logs with the location of your logs.{{site.data.alerts.end}} - If the logs are not available, please include the output of `cockroach version` for each node in the cluster. - -### Template - -You can use this as a template for [filing an issue in GitHub](https://github.com/cockroachdb/cockroach/issues/new): - -~~~ - -## Summary - - - -## Steps to reproduce - -1. -2. -3. - -## Expected Result - - - -## Actual Result - - - -## Log files/version - -### Node 1 - - - -### Node 2 - - - -### Node 3 - - - -~~~ diff --git a/src/current/v22.1/float.md b/src/current/v22.1/float.md deleted file mode 100644 index 828f26fee4f..00000000000 --- a/src/current/v22.1/float.md +++ /dev/null @@ -1,105 +0,0 @@ ---- -title: FLOAT -summary: The FLOAT data type stores inexact, floating-point numbers with up to 17 digits in total and at least one digit to the right of the decimal point. -toc: true -docs_area: reference.sql ---- - -CockroachDB supports various inexact, floating-point number [data types](data-types.html) with up to 17 digits of decimal precision. - -They are handled internally using the [standard double-precision (64-bit binary-encoded) IEEE754 format](https://en.wikipedia.org/wiki/IEEE_floating_point). 
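Because this binary representation is inexact, some decimal values cannot be stored exactly, and simple arithmetic can expose small rounding artifacts. For example, the following query returns `0.30000000000000004` rather than exactly `0.3` (the precise digits shown may vary with client formatting):

{% include_cached copy-clipboard.html %}
~~~ sql
> SELECT 0.1::FLOAT + 0.2::FLOAT AS sum;
~~~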
- - -## Names and Aliases - -Name | Aliases ------|-------- -`FLOAT` | None -`REAL` | `FLOAT4` -`DOUBLE PRECISION` | `FLOAT8` - -## Syntax - -A constant value of type `FLOAT` can be entered as a [numeric literal](sql-constants.html#numeric-literals). -For example: `1.414` or `-1234`. - -The special IEEE754 values for positive infinity, negative infinity -and [NaN (Not-a-Number)](https://en.wikipedia.org/wiki/NaN) cannot be -entered using numeric literals directly and must be converted using an -[interpreted literal](sql-constants.html#interpreted-literals) or an -[explicit conversion](scalar-expressions.html#explicit-type-coercions) -from a string literal instead. - -The following values are recognized: - - Syntax | Value -----------------------------------------|------------------------------------------------ - `inf`, `infinity`, `+inf`, `+infinity` | +∞ - `-inf`, `-infinity` | -∞ - `nan` | [NaN (Not-a-Number)](https://en.wikipedia.org/wiki/NaN) - -For example: - -- `FLOAT '+Inf'` -- `'-Inf'::FLOAT` -- `CAST('NaN' AS FLOAT)` - -## Size - -A `FLOAT` column supports values up to 8 bytes in width, but the total storage size is likely to be larger due to CockroachDB metadata. - -## Examples - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE floats (a FLOAT PRIMARY KEY, b REAL, c DOUBLE PRECISION); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM floats; -~~~ - -~~~ - column_name | data_type | is_nullable | column_default | generation_expression | indices | is_hidden ---------------+-----------+-------------+----------------+-----------------------+-----------+------------ - a | FLOAT8 | false | NULL | | {primary} | false - b | FLOAT4 | true | NULL | | {primary} | false - c | FLOAT8 | true | NULL | | {primary} | false -(3 rows) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO floats VALUES (1.012345678901, 2.01234567890123456789, CAST('+Inf' AS FLOAT)); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM floats; -~~~ - -~~~ -+----------------+--------------------+------+ -| a | b | c | -+----------------+--------------------+------+ -| 1.012345678901 | 2.0123456789012346 | +Inf | -+----------------+--------------------+------+ -(1 row) -# Note that the value in "b" has been limited to 17 digits. -~~~ - -## Supported casting and conversion - -`FLOAT` values can be [cast](data-types.html#data-type-conversions-and-casts) to any of the following data types: - -Type | Details ------|-------- -`INT` | Truncates decimal precision and requires values to be between -2^63 and 2^63-1 -`DECIMAL` | Causes an error to be reported if the value is NaN or +/- Inf. -`BOOL` | **0** converts to `false`; all other values convert to `true` -`STRING` | -- - -## See also - -[Data Types](data-types.html) diff --git a/src/current/v22.1/flyway.md b/src/current/v22.1/flyway.md deleted file mode 100644 index ee2b3db4484..00000000000 --- a/src/current/v22.1/flyway.md +++ /dev/null @@ -1,190 +0,0 @@ ---- -title: Migrate CockroachDB Schemas with Flyway -summary: This tutorial guides you through a series of simple database schema changes using Flyway, an open-source schema migration tool. -toc: true -docs_area: develop ---- - -This page guides you through a series of simple database schema changes using Flyway, an open-source schema migration tool. For detailed information about using Flyway, see the [Flyway documentation site](https://flywaydb.org/documentation/). 
- -## Watch the demo - -{% include_cached youtube.html video_id="xz4j5tU0ZRU" %} - -## Before You Begin - -Before you begin, do the following: - -1. [Install CockroachDB](install-cockroachdb.html) and [start a secure cluster](secure-a-cluster.html). -1. Download the latest version of the [Flyway command-line tool](https://flywaydb.org/documentation/commandline/#download-and-installation). CockroachDB v21.1 and later are fully compatible with Flyway versions 7.1.0 and greater. - -## Step 1. Configure Flyway connect to CockroachDB - -1. Extract the Flyway TAR file that you downloaded, and change directories to the extracted `flyway-x.x.x` folder. For example: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ tar -xvf flyway-commandline-6.4.2-macosx-x64.tar.gz - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cd flyway-6.4.2 - ~~~ - -1. Edit the `flyway-x.x.x/conf/flyway.conf` configuration file to specify the correct [connection parameters](connection-parameters.html) for your running, secure cluster. For example: - - {% include_cached copy-clipboard.html %} - ~~~ conf - ... - flyway.url=jdbc:postgresql://localhost:26257/bank?ssl=true&sslmode=require&sslrootcert=certs/ca.crt&sslkey=certs/client.max.key&sslcert=certs/client.max.crt - flyway.user=max - flyway.password=roach - ... - ~~~ - - {{site.data.alerts.callout_info}} - The SSL connection parameters in the connection URL must specify the full path to the certificates that you generated when you [started the secure cluster](secure-a-cluster.html). Also, the user that you specify (e.g., `max`) must also have [admin privileges](grant.html) on the database whose schema you want to change (e.g., `bank`). - {{site.data.alerts.end}} - -## Step 2. Create a schema migration - -Flyway executes SQL statements defined in `.sql` files located in the `flyway-x.x.x/sql` subdirectory. The schema changes defined in these `.sql` files are known as *migrations*. - -1. Create a `.sql` file with a name that follows the [Flyway naming conventions](https://flywaydb.org/documentation/migrations#naming). For example: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ touch sql/V1__Add_accounts_table.sql - ~~~ - -1. Edit the `.sql` file, adding a [`CREATE TABLE IF NOT EXISTS`](create-table.html) statement for the table that you want to create, and a simple [`INSERT`](insert.html) statement to initialize the table with some data. For example: - - {% include_cached copy-clipboard.html %} - ~~~ sql - /* Create accounts table */ - CREATE TABLE IF NOT EXISTS accounts ( - id INT PRIMARY KEY, - balance INT - ); - - /* Add initial data to accounts table */ - INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250); - ~~~ - -## Step 3. Execute a schema migration - -To execute the migration, run the following command from the top of the `flyway-x.x.x` directory: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ ./flyway migrate -~~~ - -You should see output similar to the following: - -~~~ -Database: jdbc:postgresql://localhost:26257/bank (PostgreSQL 9.5) -Successfully validated 1 migration (execution time 00:00.011s) -Creating Schema History table "bank"."flyway_schema_history" ... -Current version of schema "bank": << Empty Schema >> -Migrating schema "bank" to version 1 - Add accounts table [non-transactional] -Successfully applied 1 migration to schema "bank" (execution time 00:00.081s) -~~~ - -The schema `"bank"` is now on version 1. - -## Step 4. 
Add additional migrations - -Suppose that you want to change the primary key of the `accounts` table from a simple, incrementing [integer](int.html) (in this case, `id`) to an auto-generated [UUID](uuid.html), to follow some [CockroachDB best practices](performance-best-practices-overview.html#unique-id-best-practices). You can make these changes to the schema by creating and executing an additional migration: - -1. Create a second `.sql` schema migration file, and name the file following the [Flyway naming conventions](https://flywaydb.org/documentation/migrations#naming), to specify a new migration version. For example: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ touch sql/V2__Alter_accounts_pk.sql - ~~~ - - This file will create a version 2 of the `"bank"` schema. - -1. Edit the `V2__Alter_accounts_pk.sql` migration file, adding some SQL statements that will add a new column to the `accounts` table, and alter the table's primary key. For example: - - {% include_cached copy-clipboard.html %} - ~~~ sql - /* Add new UUID-typed column */ - ALTER TABLE accounts ADD COLUMN unique_id UUID NOT NULL DEFAULT gen_random_uuid(); - - /* Change primary key */ - ALTER TABLE accounts ALTER PRIMARY KEY USING COLUMNS (unique_id); - ~~~ - -1. Execute the migration by running the `flyway migrate` command from the top of the `flyway-x.x.x` directory: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ ./flyway migrate - ~~~ - - You should see output similar to the following: - - ~~~ - Flyway Community Edition 6.4.2 by Redgate - Database: jdbc:postgresql://localhost:26257/bank (PostgreSQL 9.5) - Successfully validated 2 migrations (execution time 00:00.016s) - Current version of schema "bank": 1 - Migrating schema "bank" to version 2 - Alter accounts pk [non-transactional] - DB: primary key changes are finalized asynchronously; further schema changes on this table may be restricted until the job completes - Successfully applied 1 migration to schema "bank" (execution time 00:00.508s) - ~~~ - - The schema `"bank"` is now on version 2. - -1. Check the complete and pending Flyway migrations with the `flyway info` command: - - ~~~ shell - $ ./flyway info - ~~~ - - ~~~ - Flyway Community Edition 6.4.2 by Redgate - Database: jdbc:postgresql://localhost:26257/bank (PostgreSQL 9.5) - Schema version: 2 - - +-----------+---------+--------------------+------+---------------------+---------+ - | Category | Version | Description | Type | Installed On | State | - +-----------+---------+--------------------+------+---------------------+---------+ - | Versioned | 1 | Add accounts table | SQL | 2020-05-13 17:16:54 | Success | - | Versioned | 2 | Alter accounts pk | SQL | 2020-05-14 13:27:27 | Success | - +-----------+---------+--------------------+------+---------------------+---------+ - ~~~ - -## Flyway and Transactions - -When used with most databases, [Flyway wraps the statements in a migration within a single transaction](https://flywaydb.org/documentation/migrations#transactions). When used with CockroachDB, Flyway does *not* wrap schema migrations in transactions. [Transaction boundaries](transactions.html) are instead handled by CockroachDB. - -### Transaction retries - -When multiple, concurrent transactions or statements are issued to a single CockroachDB cluster, [transaction contention](performance-best-practices-overview.html#transaction-contention) can cause schema migrations to fail. 
In the event of transaction contention, CockroachDB returns a [`40001 SQLSTATE` (i.e., a serialization failure)](common-errors.html#restart-transaction), and Flyway automatically retries the migration. For more information about client-side transaction retries in CockroachDB, see [Transaction Retries](transactions.html#transaction-retries). - -### Transactional schema changes - -Support for [transactional schema changes](online-schema-changes.html) is limited in CockroachDB. As a result, if a migration with multiple schema changes fails at any point, the partial migration could leave the database schema in an incomplete state. If this happens, manual intervention will be required to determine the state of the schema, in addition to any possible fixes. - -Note that this limitation also applies to single [`ALTER TABLE`](alter-table.html) statements that include multiple schema changes (e.g., `ALTER TABLE ... ALTER COLUMN ... RENAME ..., ADD COLUMN ...`). - -## Report Issues with Flyway and CockroachDB - -If you run into problems, please file an issue on the [Flyway issue tracker](https://github.com/flyway/flyway/issues), including the following details about the environment where you encountered the issue: - -- CockroachDB version ([`cockroach version`](cockroach-version.html)) -- Flyway version -- Operating system -- Steps to reproduce the behavior - -## See Also - -+ [Flyway documentation](https://flywaydb.org/documentation/) -+ [Flyway issue tracker](https://github.com/flyway/flyway/issues) -+ [Client connection parameters](connection-parameters.html) -+ [Third-Party Database Tools](third-party-database-tools.html) -+ [Learn CockroachDB SQL](learn-cockroachdb-sql.html) diff --git a/src/current/v22.1/follower-reads.md b/src/current/v22.1/follower-reads.md deleted file mode 100644 index 853454e3285..00000000000 --- a/src/current/v22.1/follower-reads.md +++ /dev/null @@ -1,264 +0,0 @@ ---- -title: Follower Reads -summary: To reduce latency for read queries, you can choose to have the closest replica serve the request. -toc: true -docs_area: develop ---- - -A _follower read_ is performed on the [nearest replica](architecture/overview.html#architecture-replica) relative to the SQL gateway that is executing the SQL statement regardless of the replica's [leaseholder](architecture/overview.html#architecture-leaseholder) status. Using the nearest replica can reduce read latencies and increase throughput. Applications in [multi-region deployments](topology-follower-reads.html) especially can use follower reads to get improved performance. - -{% include enterprise-feature.md %} - -## Follower read types - -A _strong follower read_ is a read taken from a [Global](global-tables.html) table. Such tables are optimized for low-latency reads from every region in the database. The tradeoff is that writes will incur higher latencies from any given region, since writes have to be replicated across every region to make the global low-latency reads possible. For more information about global tables, including troubleshooting information, see [Global Tables](global-tables.html). - -A [_stale follower read_](#stale-follower-reads) is a historical read taken from the nearest replica. You should use stale follower reads only when your application can tolerate reading stale data, since the results of stale follower reads may not reflect the latest writes against the tables you are querying. - -The following table summarizes the read types and how to accomplish them. 
- - | Strong Reads | Stale Reads ------|-----------|---------------------------------------------------------------- -Only From Leaseholder | `SELECT` | N/A -From Nearest Replica | `SELECT` on `GLOBAL` table | `SELECT` with `AS OF SYSTEM TIME ` - -## Stale follower reads - -CockroachDB provides the following types of stale follower reads: - -- _Exact staleness read_: A historical read as of a static, user-provided timestamp. See [Exact staleness reads](#exact-staleness-reads). -- _Bounded staleness read_: A historical read that uses a dynamic, system-determined timestamp to minimize staleness while being more tolerant to replication lag than an exact staleness read. See [Bounded staleness reads](#bounded-staleness-reads). - -{{site.data.alerts.callout_info}} -Stale follower reads are always served from a consistent view; CockroachDB does not allow a historical read to view uncommitted data. -{{site.data.alerts.end}} - -### Exact staleness reads - -An _exact staleness read_ is a historical read as of a static, user-provided timestamp. - -For requirements and limitations, see [Exact staleness reads and long-running writes](#exact-staleness-reads-and-long-running-writes) and [Exact staleness read timestamps must be far enough in the past](#exact-staleness-read-timestamps-must-be-far-enough-in-the-past). - -#### When to use exact staleness reads - -Use exact staleness follower reads when you: - -- Need multi-statement reads inside [transactions](transactions.html). -- Can tolerate reading older data (at least 4.8 seconds in the past), to reduce the chance that the historical query timestamp is not quite old enough to prevent blocking on a conflicting write and thus being able to be served by a local replica. -- Do not need the increase in availability provided by [bounded staleness reads](#bounded-staleness-reads) in the face of [network partitions](cluster-setup-troubleshooting.html#network-partition) or other failures. -- Need a read that is slightly cheaper to perform than a [bounded staleness read](#bounded-staleness-reads), because exact staleness reads don't need to dynamically compute the query timestamp. - -#### Run queries that use exact staleness follower reads - -Any [`SELECT` statement](select-clause.html) with an appropriate [`AS OF SYSTEM TIME`](as-of-system-time.html) value is an exact staleness follower read. You can use the convenience [function](functions-and-operators.html#date-and-time-functions) `follower_read_timestamp()`, which returns a [`TIMESTAMP`](timestamp.html) that provides a high probability of being served locally while not [blocking on conflicting writes](#exact-staleness-reads-and-long-running-writes). - -Use this function in an `AS OF SYSTEM TIME` statement as follows: - -``` sql -SELECT ... FROM ... AS OF SYSTEM TIME follower_read_timestamp(); -``` -#### Exact staleness follower reads demo - -The following video describes and demonstrates [exact staleness](#exact-staleness-reads) follower reads. - -{% include_cached youtube.html video_id="V--skgN_JMo" %} - -#### Exact staleness follower reads in read-only transactions - -You can set the [`AS OF SYSTEM TIME`](as-of-system-time.html) clause's value for all operations in a read-only [transaction](transactions.html): - -```sql -BEGIN; - -SET TRANSACTION AS OF SYSTEM TIME follower_read_timestamp(); -SELECT ... -SELECT ... - -COMMIT; -``` - -Follower reads are "read-only" operations; you **cannot** use them in read-write transactions. 
- -{{site.data.alerts.callout_success}} -Using the [`SET TRANSACTION`](set-transaction.html#use-the-as-of-system-time-option) statement as shown in the preceding example will make it easier to use exact staleness follower reads from [drivers and ORMs](install-client-drivers.html). - -To set `AS OF SYSTEM TIME follower_read_timestamp()` on all implicit and explicit read-only transactions by default, use one of the following options: - -- Set the `default_transaction_use_follower_reads` [session variable](set-vars.html) to `on`. When `default_transaction_use_follower_reads=on`, all read-only transactions use exact staleness follower reads. -- Execute the `SET SESSION CHARACTERISTICS AS TRANSACTION AS OF SYSTEM TIME follower_read_timestamp()` [SQL statement](set-vars.html#special-syntax-cases). This has the same effect as setting the session variable as shown above. - -You can set `default_transaction_use_follower_reads` on a per-role basis; for instructions, see [Set default session variable values for a role](alter-role.html#set-default-session-variable-values-for-a-role). -{{site.data.alerts.end}} - -### Bounded staleness reads - -A _bounded staleness read_ is a historical read that uses a dynamic, system-determined timestamp to minimize staleness while being more tolerant to replication lag than an exact staleness read. Bounded staleness reads also help increase system availability, since they provide the ability to serve reads from local replicas even in the presence of network partitions or other failures that prevent the SQL gateway from communicating with the leaseholder. - -#### When to use bounded staleness reads - -Use bounded staleness follower reads when you: - -- Need minimally stale reads from the nearest replica without blocking on [conflicting transactions](transactions.html#transaction-contention). This is possible because the historical timestamp is chosen dynamically and the least stale timestamp that can be served locally without blocking is used. -- Can confine the read to a single statement that meets the [bounded staleness limitations](#bounded-staleness-read-limitations). -- Need higher availability than is provided by [exact staleness reads](#exact-staleness-reads). Specifically, what we mean by availability in this context is: - - The ability to serve a read with low latency from a local replica rather than a leaseholder. - - The ability to serve reads from local replicas even in the presence of a network partition or other failure event that prevents the SQL gateway from communicating with the leaseholder. Once a replica begins serving follower reads at a timestamp, it will always continue to serve follower reads at that timestamp. Even if the replica becomes completely partitioned away from the rest of its range, it will continue to stay available for (increasingly) stale reads. - -#### Run queries that use bounded staleness follower reads - -To get a bounded staleness read, use one of the following built-in functions: - -Name | Description ----- | ----------- -`with_min_timestamp(TIMESTAMPTZ, [nearest_only])` | Defines a minimum [timestamp](timestamp.html) at which to perform the [bounded staleness read](follower-reads.html#bounded-staleness-reads). The actual timestamp of the read may be equal to or later than the provided timestamp, but cannot be before the provided timestamp. This is useful to request a read from nearby followers, if possible, while enforcing causality between an operation at some point in time and any dependent reads. 
This function accepts an optional `nearest_only` argument that will error if the reads cannot be serviced from a nearby replica. -`with_max_staleness(INTERVAL, [nearest_only])` | Defines a maximum staleness interval with which to perform the [bounded staleness read](follower-reads.html#bounded-staleness-reads). The timestamp of the read can be at most this stale with respect to the current time. This is useful to request a read from nearby followers, if possible, while placing some limit on how stale results can be. Note that `with_max_staleness(INTERVAL)` is equivalent to `with_min_timestamp(now() - INTERVAL)`. This function accepts an optional `nearest_only` argument that will error if the reads cannot be serviced from a nearby replica. - -This example performs a bounded staleness follower read against a [demo cluster](cockroach-demo.html) with the [MovR dataset](movr.html). - -1. Start the demo cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - cockroach demo - ~~~ - -1. Issue a single-statement point query to [select](selection-queries.html) a single row from a table at a historical [timestamp](timestamp.html) by passing the output of the `with_max_staleness()` [function](functions-and-operators.html) to the [`AS OF SYSTEM TIME`](as-of-system-time.html) clause: - - {% include_cached copy-clipboard.html %} - ~~~ sql - SELECT code FROM promo_codes AS OF SYSTEM TIME with_max_staleness('10s') where code = '0_explain_theory_something'; - ~~~ - - ~~~ - code - ------------------------------ - 0_explain_theory_something - (1 row) - ~~~ - - The query returns successfully. - - If it had failed with the following error message, you would need to [troubleshoot your query to ensure it meets the conditions required for bounded staleness reads](#bounded-staleness-read-limitations). - - ~~~ - ERROR: unimplemented: cannot use bounded staleness for queries that may touch more than one row or require an index join - SQLSTATE: 0A000 - HINT: You have attempted to use a feature that is not yet implemented. - See: https://go.crdb.dev/issue-v/67562/v21.2 - ~~~ - - You can verify using [`EXPLAIN`](explain.html) that the reason this query was able to perform a bounded staleness read is that it performed a point lookup from a single row: - - {% include_cached copy-clipboard.html %} - ~~~ sql - EXPLAIN SELECT code FROM promo_codes AS OF SYSTEM TIME with_max_staleness('10s') where code = '0_explain_theory_something'; - ~~~ - - ~~~ - info - -------------------------------------------------------------------------------- - distribution: local - vectorized: true - - • scan - estimated row count: 1 (0.10% of the table; stats collected 4 minutes ago) - table: promo_codes@primary - spans: [/'0_explain_theory_something' - /'0_explain_theory_something'] - (7 rows) - ~~~ - -### Verify that a cluster is performing follower reads - -To verify that a cluster is performing follower reads, go to the [Custom Chart Debug Page in the DB Console](ui-custom-chart-debug-page.html) and add the metric `follower_read.success_count` to the time-series graph. The number of follower reads performed by your cluster will be shown. - -### How stale follower reads work - -Each CockroachDB range tracks a property called its [_closed timestamp_](architecture/transaction-layer.html#closed-timestamps), which means that no new writes can ever be introduced at or below that timestamp. The closed timestamp is advanced continuously on the leaseholder, and lags the current time by some target interval. 
As the closed timestamp is advanced, notifications are sent to each follower. If a range receives a write at a timestamp less than or equal to its closed timestamp, the write is forced to change its timestamp, which might result in a [transaction retry error](transaction-retry-error-reference.html). - -With follower reads, any replica in a range can serve a read for a key as long as the time at which the operation is performed (i.e., the [`AS OF SYSTEM TIME`](as-of-system-time.html) value) is less than or equal to the range's closed timestamp. - -When a gateway node in a cluster receives a request to read a key with a sufficiently old [`AS OF SYSTEM TIME`](as-of-system-time.html) value, it forwards the request to the closest node that contains a replica of the data—whether it be a follower or the leaseholder. - -For further details, see [An Epic Read on Follower Reads](https://www.cockroachlabs.com/blog/follower-reads-stale-data/). - -### Limitations - -- [Exact staleness reads and long-running writes](#exact-staleness-reads-and-long-running-writes) -- [Exact staleness read timestamps must be far enough in the past](#exact-staleness-read-timestamps-must-be-far-enough-in-the-past) -- [Bounded staleness read limitations](#bounded-staleness-read-limitations) - -#### Exact staleness reads and long-running writes - -Long-running write transactions will create [write intents](architecture/transaction-layer.html#write-intents) with a timestamp near when the transaction began. When an exact staleness follower read encounters a write intent, it will often end up in a ["transaction wait queue"](architecture/transaction-layer.html#txnwaitqueue), waiting for the operation to complete; however, this runs counter to the benefit exact staleness reads provide. - -To counteract this, you can issue all follower reads in explicit [transactions set with `HIGH` priority](transactions.html#transaction-priorities): - -```sql -BEGIN PRIORITY HIGH AS OF SYSTEM TIME follower_read_timestamp(); -SELECT ... -SELECT ... -COMMIT; -``` - -#### Exact staleness read timestamps must be far enough in the past - -If an exact staleness read is not using an [`AS OF SYSTEM TIME`](as-of-system-time.html) value far enough in the past, CockroachDB cannot perform a follower read. Instead, the read must access the [leaseholder replica](architecture/overview.html#architecture-leaseholder). This adds network latency if the leaseholder is not the closest replica to the gateway node. Most users will [use the `follower_read_timestamp()` function](#run-queries-that-use-exact-staleness-follower-reads) to get a timestamp far enough in the past that there is a high probability of getting a follower read. - -#### Bounded staleness read limitations - -Bounded staleness reads have the following limitations: - -- They must be used in a [single-statement (aka implicit) transaction](transactions.html#individual-statements). -- They must read from a single row. -- They must not require an [index](indexes.html) [join](joins.html). In other words, the index used by the read query must be either a [primary](primary-key.html) [index](indexes.html), or some other index that covers the entire query by [`STORING`](create-index.html#store-columns) all columns. - -For example, let's look at a read query that cannot be served as a bounded staleness read. We will use a [demo cluster](cockroach-demo.html), which automatically loads the [MovR dataset](movr.html). 
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-cockroach demo
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-SELECT code FROM promo_codes AS OF SYSTEM TIME with_max_staleness('10s') LIMIT 1;
-ERROR: unimplemented: cannot use bounded staleness for queries that may touch more than one row or require an index join
-SQLSTATE: 0A000
-HINT: You have attempted to use a feature that is not yet implemented.
-See: https://go.crdb.dev/issue-v/67562/v21.2
-~~~
-
-As noted by the error message, this query cannot be served as a bounded staleness read because in this case it would touch more than one row. Even though we used a [`LIMIT 1` clause](limit-offset.html), the query would still have to touch more than one row in order to filter out the additional results.
-
-We can verify that more than one row would be touched by issuing [`EXPLAIN`](explain.html) on the same query, but without the [`AS OF SYSTEM TIME`](as-of-system-time.html) clause:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-EXPLAIN SELECT code FROM promo_codes LIMIT 1;
-~~~
-
-~~~
-  info
--------------------------------------------------------------------------------
-  distribution: full
-  vectorized: true
-
-  • scan
-    estimated row count: 1 (0.10% of the table; stats collected 1 minute ago)
-    table: promo_codes@primary
-    spans: LIMITED SCAN
-    limit: 1
-(8 rows)
-~~~
-
-The output verifies that this query performs a scan of the primary [index](indexes.html) on the `promo_codes` table, which is why it cannot be used for a bounded staleness read.
-
-For an example showing how to successfully perform a bounded staleness read, see [Run queries that use bounded staleness follower reads](#run-queries-that-use-bounded-staleness-follower-reads).
-
-## See also
-
-- [Follower Reads Topology](topology-follower-reads.html)
-- [Cluster Settings Overview](cluster-settings.html)
-- [Load-Based Splitting](load-based-splitting.html)
-- [Network Latency Page](ui-network-latency-page.html)
-- [Enterprise Features](enterprise-licensing.html)
diff --git a/src/current/v22.1/foreign-key.md b/src/current/v22.1/foreign-key.md
deleted file mode 100644
index 1e21a3f0fb3..00000000000
--- a/src/current/v22.1/foreign-key.md
+++ /dev/null
@@ -1,921 +0,0 @@
----
-title: Foreign Key Constraint
-summary: The `FOREIGN KEY` constraint specifies a column can contain only values exactly matching existing values from the column it references.
-toc: true
-docs_area: reference.sql
----
-
-A foreign key is a column (or combination of columns) in a table whose values must match values of a column in some other table. `FOREIGN KEY` constraints enforce [referential integrity](https://en.wikipedia.org/wiki/Referential_integrity), which essentially says that if column value A refers to column value B, then column value B must exist.
-
-For example, given an `orders` table and a `customers` table, if you create a column `orders.customer_id` that references the `customers.id` primary key (sketched below):
-
-- Each value inserted or updated in `orders.customer_id` must exactly match a value in `customers.id`, or be `NULL`.
-- Values in `customers.id` that are referenced by `orders.customer_id` cannot be deleted or updated, unless you have [cascading actions](#use-a-foreign-key-constraint-with-cascade). However, values of `customers.id` that are _not_ present in `orders.customer_id` can be deleted or updated.
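-
-The following is a minimal sketch of that relationship (the table definitions are illustrative; complete, runnable versions appear in [Usage examples](#usage-examples) below):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-CREATE TABLE customers (
-    id INT PRIMARY KEY
-);
-
-CREATE TABLE orders (
-    id INT PRIMARY KEY,
-    customer_id INT REFERENCES customers (id)
-);
-~~~
-
-With this schema, every non-`NULL` value written to `orders.customer_id` must already exist in `customers.id`, and a row in `customers` cannot be deleted or updated while `orders` rows still reference it, unless a cascading action is specified.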
- -To learn more about the basics of foreign keys, watch the following video: - -{% include_cached youtube.html video_id="5kiMg7GXAsY" widescreen=true %} - -{{site.data.alerts.callout_success}} -To read more about how foreign keys work, see our [What is a Foreign Key? (With SQL Examples)](https://www.cockroachlabs.com/blog/what-is-a-foreign-key/) blog post. -{{site.data.alerts.end}} - -## Details - -### Rules for creating foreign keys - -**Foreign Key Columns** - -- Foreign key columns must use their referenced column's [type](data-types.html). -- A foreign key column cannot be a virtual [computed column](computed-columns.html), but it can be a stored computed column. -- A single column can have multiple foreign key constraints. For an example, see [Add multiple foreign key constraints to a single column](#add-multiple-foreign-key-constraints-to-a-single-column). -- A foreign key column can reference the [`crdb_region` column](set-locality.html#crdb_region) in [`REGIONAL BY ROW`](multiregion-overview.html#regional-by-row-tables) tables even if the `crdb_region` column is not explicitly part of a `UNIQUE` constraint. This is possible because `crdb_region` is implicitly included in every index on `REGIONAL BY ROW` tables as the partitioning key. This applies to whichever column is used as the partitioning column, in case a different name is used via `REGIONAL BY ROW AS`. - -**Referenced Columns** - -- Referenced columns must contain only unique sets of values. This means the `REFERENCES` clause must use exactly the same columns as a [`UNIQUE`](unique.html) or [`PRIMARY KEY`](primary-key.html) constraint on the referenced table. For example, the clause `REFERENCES tbl (C, D)` requires `tbl` to have either the constraint `UNIQUE (C, D)` or `PRIMARY KEY (C, D)`. The order of the columns in the foreign key definition does not need to match the order of the columns in the corresponding `UNIQUE` or `PRIMARY KEY` constraint. -- In the `REFERENCES` clause, if you specify a table but no columns, CockroachDB references the table's primary key. In these cases, the `FOREIGN KEY` constraint and the referenced table's primary key must contain the same number of columns. -- By default, referenced columns must be in the same database as the referencing foreign key column. To enable cross-database foreign key references, set the `sql.cross_db_fks.enabled` [cluster setting](cluster-settings.html) to `true`. - -### Null values - -Single-column foreign keys accept null values. - -Multiple-column (composite) foreign keys only accept null values in the following scenarios: - -- The write contains null values for all foreign key columns (if `MATCH FULL` is specified). -- The write contains null values for at least one foreign key column (if `MATCH SIMPLE` is specified). - -For more information about composite foreign keys, see the [composite foreign key matching](#composite-foreign-key-matching) section. - -Note that allowing null values in either your foreign key or referenced columns can degrade their referential integrity, since any key with a null value is never checked against the referenced table. To avoid this, you can use a [`NOT NULL` constraint](not-null.html) on foreign keys when [creating your tables](create-table.html). - -{{site.data.alerts.callout_info}} -A `NOT NULL` constraint cannot be added to existing tables. 
-{{site.data.alerts.end}}
-
-### Composite foreign key matching
-
-By default, composite foreign keys are matched using the `MATCH SIMPLE` algorithm (which is the same default as PostgreSQL). `MATCH FULL` is available if specified. You can specify both `MATCH FULL` and `MATCH SIMPLE`.
-
-All composite key matches defined prior to version 19.1 use the `MATCH SIMPLE` comparison method. If you had a composite foreign key constraint and have just upgraded to version 19.1, then please check that `MATCH SIMPLE` works for your schema and consider replacing that foreign key constraint with a `MATCH FULL` one.
-
-#### How it works
-
-For matching purposes, composite foreign keys can be in one of three states:
-
-- **Valid**: Keys that can be used for matching foreign key relationships.
-
-- **Invalid**: Keys that will not be used for matching (including for any cascading operations).
-
-- **Unacceptable**: Keys that cannot be inserted at all (an error is signalled).
-
-`MATCH SIMPLE` stipulates that:
-
-- **Valid** keys may not contain any null values.
-
-- **Invalid** keys contain one or more null values.
-
-- **Unacceptable** keys do not exist from the point of view of `MATCH SIMPLE`; all composite keys are acceptable.
-
-`MATCH FULL` stipulates that:
-
-- **Valid** keys may not contain any null values.
-
-- **Invalid** keys must have all null values.
-
-- **Unacceptable** keys have any combination of both null and non-null values. In other words, `MATCH FULL` requires that if any column of a composite key is `NULL`, then all columns of the key must be `NULL`.
-
-For examples showing how these key matching algorithms work, see [Match composite foreign keys with `MATCH SIMPLE` and `MATCH FULL`](#match-composite-foreign-keys-with-match-simple-and-match-full).
-
-{{site.data.alerts.callout_info}}
-CockroachDB does not support `MATCH PARTIAL`. For more information, see issue [#20305](https://github.com/cockroachdb/cockroach/issues/20305).
-{{site.data.alerts.end}}
-
-### Foreign key actions
-
-When you set a foreign key constraint, you can control what happens to the constrained column when the column it's referencing (the foreign key) is deleted or updated.
-
-Parameter | Description
-----------|------------
-`ON DELETE NO ACTION` | _Default action._ If there are any existing references to the key being deleted, the transaction will fail at the end of the statement. The key can be updated, depending on the `ON UPDATE` action. <br><br> Alias: `ON DELETE RESTRICT`
-`ON UPDATE NO ACTION` | _Default action._ If there are any existing references to the key being updated, the transaction will fail at the end of the statement. The key can be deleted, depending on the `ON DELETE` action. <br><br> Alias: `ON UPDATE RESTRICT`
-`ON DELETE RESTRICT` / `ON UPDATE RESTRICT` | `RESTRICT` and `NO ACTION` are currently equivalent until options for deferring constraint checking are added. To set an existing foreign key action to `RESTRICT`, the foreign key constraint must be dropped and recreated.
-`ON DELETE CASCADE` / `ON UPDATE CASCADE` | When a referenced foreign key is deleted or updated, all rows referencing that key are deleted or updated, respectively. If there are other alterations to the row, such as a `SET NULL` or `SET DEFAULT`, the delete will take precedence. <br><br> Note that `CASCADE` does not list objects it drops or updates, so it should be used cautiously.
-`ON DELETE SET NULL` / `ON UPDATE SET NULL` | When a referenced foreign key is deleted or updated, respectively, the columns of all rows referencing that key will be set to `NULL`. The column must allow `NULL` or this update will fail.
-`ON DELETE SET DEFAULT` / `ON UPDATE SET DEFAULT` | When a referenced foreign key is deleted or updated, the columns of all rows referencing that key are set to the default value for that column. <br><br> If the default value for the column is null, or if no default value is provided and the column does not have a [`NOT NULL`](not-null.html) constraint, this will have the same effect as `ON DELETE SET NULL` or `ON UPDATE SET NULL`. The default value must still conform with all other constraints, such as `UNIQUE`.
-
-{{site.data.alerts.callout_info}}
- If a foreign key column has multiple constraints that reference the same column, the foreign key action that is specified by the first foreign key takes precedence. For an example, see [Add multiple foreign key constraints to a single column](#add-multiple-foreign-key-constraints-to-a-single-column).
-{{site.data.alerts.end}}
-
-### Performance
-
-Because the foreign key constraint requires per-row checks on two tables, statements involving foreign key or referenced columns can take longer to execute.
-
-To improve query performance, we recommend doing the following:
-
-- Create a secondary index on all referencing foreign key columns that are not already indexed.
-
-- For bulk inserts into new tables with foreign key or referenced columns, use the [`IMPORT`](import.html) statement instead of [`INSERT`](insert.html).
-
-    {{site.data.alerts.callout_danger}}
-    Using [`IMPORT INTO`](import-into.html) will invalidate foreign keys without a [`VALIDATE CONSTRAINT`](validate-constraint.html) statement.
-    {{site.data.alerts.end}}
-
-## Syntax
-
-Foreign key constraints can be defined at the [table level](#table-level). However, if you only want the constraint to apply to a single column, it can be applied at the [column level](#column-level).
-
-{{site.data.alerts.callout_info}}
-You can also add the `FOREIGN KEY` constraint to existing tables through [`ADD CONSTRAINT`](add-constraint.html#add-the-foreign-key-constraint-with-cascade).
-{{site.data.alerts.end}}
-
-### Column level
-
    {% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/foreign_key_column_level.html %}
-
-| Parameter | Description |
-|-----------|-------------|
-| `table_name` | The name of the table you're creating. |
-| `column_name` | The name of the foreign key column. |
-| `column_type` | The foreign key column's [data type](data-types.html). |
-| `parent_table` | The name of the table the foreign key references. |
-| `ref_column_name` | The name of the column the foreign key references. <br><br> If you do not include the `ref_column_name` you want to reference from the `parent_table`, CockroachDB uses the first column of `parent_table`'s primary key. |
-| `column_constraints` | Any other column-level [constraints](constraints.html) you want to apply to this column. |
-| `column_def` | Definitions for any other columns in the table. |
-| `table_constraints` | Any table-level [constraints](constraints.html) you want to apply. |
-
-**Example**
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE IF NOT EXISTS orders (
-    id INT PRIMARY KEY,
-    customer INT NOT NULL REFERENCES customers (id) ON DELETE CASCADE,
-    orderTotal DECIMAL(9,2),
-    INDEX (customer)
-  );
-~~~
-
-{{site.data.alerts.callout_danger}}
-`CASCADE` does not list objects it drops or updates, so it should be used cautiously.
-{{site.data.alerts.end}}
-
-### Table level
-
    {% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/foreign_key_table_level.html %}
    - -| Parameter | Description | -|-----------|-------------| -| `table_name` | The name of the table you're creating. | -| `column_def` | Definitions for the table's columns. | -| `name` | The name of the constraint. | -| `fk_column_name` | The name of the foreign key column. | -| `parent_table` | The name of the table the foreign key references. | -| `ref_column_name` | The name of the column the foreign key references.

    If you do not include the `column_name` you want to reference from the `parent_table`, CockroachDB uses the first column of `parent_table`'s primary key. -| `table_constraints` | Any other table-level [constraints](constraints.html) you want to apply. | - -**Example** - -{% include_cached copy-clipboard.html %} -~~~ sql -CREATE TABLE packages ( - customer INT, - "order" INT, - id INT, - address STRING(50), - delivered BOOL, - delivery_date DATE, - PRIMARY KEY (customer, "order", id), - CONSTRAINT fk_order FOREIGN KEY (customer, "order") REFERENCES orders - ); -~~~ - -## Usage examples - -### Use a foreign key constraint with default actions - -In this example, we'll create a table with a foreign key constraint with the default [actions](#foreign-key-actions) (`ON UPDATE NO ACTION ON DELETE NO ACTION`). - -First, create the referenced table: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE customers (id INT PRIMARY KEY, email STRING UNIQUE); -~~~ - -Next, create the referencing table: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE IF NOT EXISTS orders ( - id INT PRIMARY KEY, - customer INT NOT NULL REFERENCES customers (id), - orderTotal DECIMAL(9,2), - INDEX (customer) - ); -~~~ - -Let's insert a record into each table: - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO customers VALUES (1001, 'a@co.tld'), (1234, 'info@cockroachlabs.com'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO orders VALUES (1, 1002, 29.99); -~~~ -~~~ -pq: foreign key violation: value [1002] not found in customers@primary [id] -~~~ - -The second record insertion returns an error because the customer `1002` doesn't exist in the referenced table. - -Let's insert a record into the referencing table and try to update the referenced table: - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO orders VALUES (1, 1001, 29.99); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> UPDATE customers SET id = 1002 WHERE id = 1001; -~~~ -~~~ -pq: foreign key violation: value(s) [1001] in columns [id] referenced in table "orders" -~~~ - -The update to the referenced table returns an error because `id = 1001` is referenced and the default [foreign key action](#foreign-key-actions) is enabled (`ON UPDATE NO ACTION`). However, `id = 1234` is not referenced and can be updated: - -{% include_cached copy-clipboard.html %} -~~~ sql -> UPDATE customers SET id = 1111 WHERE id = 1234; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM customers; -~~~ -~~~ - id | email -+------+------------------------+ - 1001 | a@co.tld - 1111 | info@cockroachlabs.com -(2 rows) -~~~ - -Now let's try to delete a referenced row: - -{% include_cached copy-clipboard.html %} -~~~ sql -> DELETE FROM customers WHERE id = 1001; -~~~ -~~~ -pq: foreign key violation: value(s) [1001] in columns [id] referenced in table "orders" -~~~ - -Similarly, the deletion returns an error because `id = 1001` is referenced and the default [foreign key action](#foreign-key-actions) is enabled (`ON DELETE NO ACTION`). 
However, `id = 1111` is not referenced and can be deleted: - -{% include_cached copy-clipboard.html %} -~~~ sql -> DELETE FROM customers WHERE id = 1111; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM customers; -~~~ -~~~ - id | email -+------+----------+ - 1001 | a@co.tld -(1 row) -~~~ - -### Use a Foreign Key Constraint with `CASCADE` - -In this example, we'll create a table with a foreign key constraint with the [foreign key actions](#foreign-key-actions) `ON UPDATE CASCADE` and `ON DELETE CASCADE`. - -First, create the referenced table: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE customers_2 ( - id INT PRIMARY KEY - ); -~~~ - -Then, create the referencing table: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE orders_2 ( - id INT PRIMARY KEY, - customer_id INT REFERENCES customers_2(id) ON UPDATE CASCADE ON DELETE CASCADE - ); -~~~ - -Insert a few records into the referenced table: - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO customers_2 VALUES (1), (2), (3); -~~~ - -Insert some records into the referencing table: - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO orders_2 VALUES (100,1), (101,2), (102,3), (103,1); -~~~ - -Now, let's update an `id` in the referenced table: - -{% include_cached copy-clipboard.html %} -~~~ sql -> UPDATE customers_2 SET id = 23 WHERE id = 1; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM customers_2; -~~~ -~~~ - id -+----+ - 2 - 3 - 23 -(3 rows) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM orders_2; -~~~ -~~~ - id | customer_id -+-----+-------------+ - 100 | 23 - 101 | 2 - 102 | 3 - 103 | 23 -(4 rows) -~~~ - -When `id = 1` was updated to `id = 23` in `customers_2`, the update propagated to the referencing table `orders_2`. - -Similarly, a deletion will cascade. Let's delete `id = 23` from `customers_2`: - -{% include_cached copy-clipboard.html %} -~~~ sql -> DELETE FROM customers_2 WHERE id = 23; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM customers_2; -~~~ -~~~ - id -+----+ - 2 - 3 -(2 rows) -~~~ - -Let's check to make sure the rows in `orders_2` where `customers_id = 23` were also deleted: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM orders_2; -~~~ -~~~ - id | customer_id -+-----+-------------+ - 101 | 2 - 102 | 3 -(2 rows) -~~~ - -### Use a Foreign Key Constraint with `SET NULL` - -In this example, we'll create a table with a foreign key constraint with the [foreign key actions](#foreign-key-actions) `ON UPDATE SET NULL` and `ON DELETE SET NULL`. 
- -First, create the referenced table: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE customers_3 ( - id INT PRIMARY KEY - ); -~~~ - -Then, create the referencing table: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE orders_3 ( - id INT PRIMARY KEY, - customer_id INT REFERENCES customers_3(id) ON UPDATE SET NULL ON DELETE SET NULL - ); -~~~ - -Insert a few records into the referenced table: - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO customers_3 VALUES (1), (2), (3); -~~~ - -Insert some records into the referencing table: - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO orders_3 VALUES (100,1), (101,2), (102,3), (103,1); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM orders_3; -~~~ -~~~ - id | customer_id -+-----+-------------+ - 100 | 1 - 101 | 2 - 102 | 3 - 103 | 1 -(4 rows) -~~~ - -Now, let's update an `id` in the referenced table: - -{% include_cached copy-clipboard.html %} -~~~ sql -> UPDATE customers_3 SET id = 23 WHERE id = 1; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM customers_3; -~~~ -~~~ - id -+----+ - 2 - 3 - 23 -(3 rows) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM orders_3; -~~~ -~~~ - id | customer_id -+-----+-------------+ - 100 | NULL - 101 | 2 - 102 | 3 - 103 | NULL -(4 rows) -~~~ - -When `id = 1` was updated to `id = 23` in `customers_3`, the referencing `customer_id` was set to `NULL`. - -Similarly, a deletion will set the referencing `customer_id` to `NULL`. Let's delete `id = 2` from `customers_3`: - -{% include_cached copy-clipboard.html %} -~~~ sql -> DELETE FROM customers_3 WHERE id = 2; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM customers_3; -~~~ -~~~ - id -+----+ - 3 - 23 -(2 rows) -~~~ - -Let's check to make sure the row in `orders_3` where `customers_id = 2` was updated to `NULL`: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM orders_3; -~~~ -~~~ - id | customer_id -+-----+-------------+ - 100 | NULL - 101 | NULL - 102 | 3 - 103 | NULL -(4 rows) -~~~ - -### Use a Foreign Key Constraint with `SET DEFAULT` - -In this example, we'll create a table with a `FOREIGN` constraint with the [foreign key actions](#foreign-key-actions) `ON UPDATE SET DEFAULT` and `ON DELETE SET DEFAULT`. 
- -First, create the referenced table: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE customers_4 ( - id INT PRIMARY KEY - ); -~~~ - -Then, create the referencing table with the `DEFAULT` value for `customer_id` set to `9999`: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE orders_4 ( - id INT PRIMARY KEY, - customer_id INT DEFAULT 9999 REFERENCES customers_4(id) ON UPDATE SET DEFAULT ON DELETE SET DEFAULT - ); -~~~ - -Insert a few records into the referenced table: - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO customers_4 VALUES (1), (2), (3), (9999); -~~~ - -Insert some records into the referencing table: - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO orders_4 VALUES (100,1), (101,2), (102,3), (103,1); -~~~ - - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM orders_4; -~~~ -~~~ - id | customer_id -+-----+-------------+ - 100 | 1 - 101 | 2 - 102 | 3 - 103 | 1 -(4 rows) -~~~ - -Now, let's update an `id` in the referenced table: - -{% include_cached copy-clipboard.html %} -~~~ sql -> UPDATE customers_4 SET id = 23 WHERE id = 1; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM customers_4; -~~~ -~~~ - id -+------+ - 2 - 3 - 23 - 9999 -(4 rows) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM orders_4; -~~~ -~~~ - id | customer_id -+-----+-------------+ - 100 | 9999 - 101 | 2 - 102 | 3 - 103 | 9999 -(4 rows) -~~~ - -When `id = 1` was updated to `id = 23` in `customers_4`, the referencing `customer_id` was set to `DEFAULT` (i.e., `9999`). You can see this in the first and last rows of `orders_4`, where `id = 100` and the `customer_id` is now `9999` - -Similarly, a deletion will set the referencing `customer_id` to the `DEFAULT` value. Let's delete `id = 2` from `customers_4`: - -{% include_cached copy-clipboard.html %} -~~~ sql -> DELETE FROM customers_4 WHERE id = 2; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM customers_4; -~~~ -~~~ - id -+------+ - 3 - 23 - 9999 -(3 rows) -~~~ - -Let's check to make sure the corresponding `customer_id` value to `id = 101`, was updated to the `DEFAULT` value (i.e., `9999`) in `orders_4`: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM orders_4; -~~~ -~~~ - id | customer_id -+-----+-------------+ - 100 | 9999 - 101 | 9999 - 102 | 3 - 103 | 9999 -(4 rows) -~~~ - -If the default value for the `customer_id` column is not set, and the column does not have a [`NOT NULL`](not-null.html) constraint, `ON UPDATE SET DEFAULT` and `ON DELETE SET DEFAULT` actions set referenced column values to `NULL`. 
- -For example, let's create a new `customers_5` table and insert some values: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE customers_5 ( - id INT PRIMARY KEY - ); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO customers_5 VALUES (1), (2), (3), (4); -~~~ - -Then we can create a new `orders_5` table that references the `customers_5` table, but with no default value specified for the `ON UPDATE SET DEFAULT` and `ON DELETE SET DEFAULT` actions: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE orders_5 ( - id INT PRIMARY KEY, - customer_id INT REFERENCES customers_5(id) ON UPDATE SET DEFAULT ON DELETE SET DEFAULT - ); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO orders_5 VALUES (200,1), (201,2), (202,3), (203,4); -~~~ - -Deleting and updating values in the `customers_5` table sets the referenced values in `orders_5` to `NULL`: - -{% include_cached copy-clipboard.html %} -~~~ sql -> DELETE FROM customers_5 WHERE id = 3; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> UPDATE customers_5 SET id = 0 WHERE id = 1; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM orders_5; -~~~ -~~~ - id | customer_id -+-----+-------------+ - 200 | NULL - 201 | 2 - 202 | NULL - 203 | 4 -(4 rows) -~~~ - -### Add multiple foreign key constraints to a single column - - You can add more than one foreign key constraint to a single column. - -For example, if you create the following tables: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE customers ( - id INT PRIMARY KEY, - name STRING, - email STRING -); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE orders ( - id INT PRIMARY KEY, - customer_id INT UNIQUE, - item_number INT - ); -~~~ - -You can create a table with a column that references columns in both the `customers` and `orders` tables: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE shipments ( - tracking_number UUID DEFAULT gen_random_uuid() PRIMARY KEY, - carrier STRING, - status STRING, - customer_id INT, - CONSTRAINT fk_customers FOREIGN KEY (customer_id) REFERENCES customers(id), - CONSTRAINT fk_orders FOREIGN KEY (customer_id) REFERENCES orders(customer_id) - ); -~~~ - -Inserts into the `shipments` table must fulfill both foreign key constraints on `customer_id` (`fk_customers` and `fk_customers_2`). - -Let's insert a record into each table: - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO customers VALUES (1001, 'Alexa', 'a@co.tld'), (1234, 'Evan', 'info@cockroachlabs.com'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO orders VALUES (1, 1001, 25), (2, 1234, 15), (3, 2000, 5); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO shipments (carrier, status, customer_id) VALUES ('USPS', 'Out for delivery', 1001); -~~~ - -The last statement succeeds because `1001` matches a unique `id` value in the `customers` table and a unique `customer_id` value in the `orders` table. If `1001` was in neither of the referenced columns, or in just one of them, the statement would return an error. 
- -For instance, the following statement fulfills just one of the foreign key constraints and returns an error: - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO shipments (carrier, status, customer_id) VALUES ('DHL', 'At facility', 2000); -~~~ - -~~~ -ERROR: insert on table "shipments" violates foreign key constraint "fk_customers" -SQLSTATE: 23503 -DETAIL: Key (customer_id)=(2000) is not present in table "customers". -~~~ - -CockroachDB allows you to add multiple foreign key constraints on the same column, that reference the same column: - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE shipments ADD CONSTRAINT fk_customers_2 FOREIGN KEY (customer_id) REFERENCES customers(id) ON DELETE CASCADE; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW CONSTRAINTS FROM shipments; -~~~ - -~~~ - table_name | constraint_name | constraint_type | details | validated --------------+-----------------+-----------------+----------------------------------------------------------------------+------------ - shipments | fk_customers | FOREIGN KEY | FOREIGN KEY (customer_id) REFERENCES customers(id) | true - shipments | fk_customers_2 | FOREIGN KEY | FOREIGN KEY (customer_id) REFERENCES customers(id) ON DELETE CASCADE | true - shipments | fk_orders | FOREIGN KEY | FOREIGN KEY (customer_id) REFERENCES orders(customer_id) | true - shipments | shipments_pkey | PRIMARY KEY | PRIMARY KEY (tracking_number ASC) | true -(4 rows) -~~~ - -There are now two foreign key constraints on `customer_id` that reference the `customers(id)` column (i.e., `fk_customers` and `fk_customers_2`). - -In the event of a `DELETE` or `UPDATE` to the referenced column (`customers(id)`), the action for the first foreign key specified takes precedence. In this case, that will be the default [action](#foreign-key-actions) (`ON UPDATE NO ACTION ON DELETE NO ACTION`) on the first foreign key constraint (`fk_customers`). This means that `DELETE`s on referenced columns will fail, even though the second foreign key constraint (`fk_customer_2`) is defined with the `ON DELETE CASCADE` action. - -{% include_cached copy-clipboard.html %} -~~~ sql -> DELETE FROM orders WHERE customer_id = 1001; -~~~ - -~~~ -ERROR: delete on table "orders" violates foreign key constraint "fk_orders" on table "shipments" -SQLSTATE: 23503 -DETAIL: Key (customer_id)=(1001) is still referenced from table "shipments". -~~~ - -### Match composite foreign keys with `MATCH SIMPLE` and `MATCH FULL` - -The examples in this section show how composite foreign key matching works for both the `MATCH SIMPLE` and `MATCH FULL` algorithms. For a conceptual overview, see [Composite foreign key matching](#composite-foreign-key-matching). - -First, let's create some tables. 
`parent` is a table with a composite key: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE parent (x INT, y INT, z INT, UNIQUE (x, y, z)); -~~~ - -`full_test` has a foreign key on `parent` that uses the `MATCH FULL` algorithm: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE full_test ( - x INT, - y INT, - z INT, - FOREIGN KEY (x, y, z) REFERENCES parent (x, y, z) MATCH FULL ON DELETE CASCADE ON UPDATE CASCADE - ); -~~~ - -`simple_test` has a foreign key on `parent` that uses the `MATCH SIMPLE` algorithm (the default): - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE simple_test ( - x INT, - y INT, - z INT, - FOREIGN KEY (x, y, z) REFERENCES parent (x, y, z) ON DELETE CASCADE ON UPDATE CASCADE - ); -~~~ - -Next, we populate `parent` with some values: - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT - INTO parent - VALUES (1, 1, 1), - (2, 1, 1), - (1, 2, 1), - (1, 1, 2), - (NULL, NULL, NULL), - (1, NULL, NULL), - (NULL, 1, NULL), - (NULL, NULL, 1), - (1, 1, NULL), - (1, NULL, 1), - (NULL, 1, 1); -~~~ - -Now let's look at some `INSERT` statements to see how the different key matching algorithms work. - -- [MATCH SIMPLE](#match-simple) -- [MATCH FULL](#match-full) - -#### MATCH SIMPLE - -Inserting values into the table using the `MATCH SIMPLE` algorithm (described [above](#composite-foreign-key-matching)) gives the following results: - -| Statement | Can insert? | Throws error? | Notes | -|---------------------------------------------------+-------------+---------------+-------------------------------| -| `INSERT INTO simple_test VALUES (1,1,1)` | Yes | No | References `parent (1,1,1)`. | -| `INSERT INTO simple_test VALUES (NULL,NULL,NULL)` | Yes | No | Does not reference `parent`. | -| `INSERT INTO simple_test VALUES (1,NULL,NULL)` | Yes | No | Does not reference `parent`. | -| `INSERT INTO simple_test VALUES (NULL,1,NULL)` | Yes | No | Does not reference `parent`. | -| `INSERT INTO simple_test VALUES (NULL,NULL,1)` | Yes | No | Does not reference `parent`. | -| `INSERT INTO simple_test VALUES (1,1,NULL)` | Yes | No | Does not reference `parent`. | -| `INSERT INTO simple_test VALUES (1,NULL,1)` | Yes | No | Does not reference `parent`. | -| `INSERT INTO simple_test VALUES (NULL,1,1)` | Yes | No | Does not reference `parent`. | -| `INSERT INTO simple_test VALUES (2,2,NULL)` | Yes | No | Does not reference `parent`. | -| `INSERT INTO simple_test VALUES (2,2,2)` | No | Yes | No `parent` reference exists. | - -#### MATCH FULL - -Inserting values into the table using the `MATCH FULL` algorithm (described [above](#composite-foreign-key-matching)) gives the following results: - -| Statement | Can insert? | Throws error? | Notes | -|-------------------------------------------------+-------------+---------------+-----------------------------------------------------| -| `INSERT INTO full_test VALUES (1,1,1)` | Yes | No | References `parent(1,1,1)`. | -| `INSERT INTO full_test VALUES (NULL,NULL,NULL)` | Yes | No | Does not reference `parent`. | -| `INSERT INTO full_test VALUES (1,NULL,NULL)` | No | Yes | Can't mix null and non-null values in `MATCH FULL`. | -| `INSERT INTO full_test VALUES (NULL,1,NULL)` | No | Yes | Can't mix null and non-null values in `MATCH FULL`. | -| `INSERT INTO full_test VALUES (NULL,NULL,1)` | No | Yes | Can't mix null and non-null values in `MATCH FULL`. | -| `INSERT INTO full_test VALUES (1,1,NULL)` | No | Yes | Can't mix null and non-null values in `MATCH FULL`. 
| -| `INSERT INTO full_test VALUES (1,NULL,1)` | No | Yes | Can't mix null and non-null values in `MATCH FULL`. | -| `INSERT INTO full_test VALUES (NULL,1,1)` | No | Yes | Can't mix null and non-null values in `MATCH FULL`. | -| `INSERT INTO full_test VALUES (2,2,NULL)` | No | Yes | Can't mix null and non-null values in `MATCH FULL`. | -| `INSERT INTO full_test VALUES (2,2,2)` | No | Yes | No `parent` reference exists. | - -## See also - -- [Constraints](constraints.html) -- [`DROP CONSTRAINT`](drop-constraint.html) -- [`ADD CONSTRAINT`](add-constraint.html) -- [`CHECK` constraint](check.html) -- [`DEFAULT` constraint](default-value.html) -- [`NOT NULL` constraint](not-null.html) -- [`PRIMARY KEY` constraint](primary-key.html) -- [`UNIQUE` constraint](unique.html) -- [`SHOW CONSTRAINTS`](show-constraints.html) -- [What is a Foreign Key? (With SQL Examples)](https://www.cockroachlabs.com/blog/what-is-a-foreign-key/) diff --git a/src/current/v22.1/frequently-asked-questions.md b/src/current/v22.1/frequently-asked-questions.md deleted file mode 100644 index fe8a055f3b4..00000000000 --- a/src/current/v22.1/frequently-asked-questions.md +++ /dev/null @@ -1,185 +0,0 @@ ---- -title: CockroachDB FAQs -summary: CockroachDB FAQs - What is CockroachDB? How does it work? What makes it different from other databases? -tags: postgres, cassandra, google cloud spanner -toc: true -docs_area: get_started ---- - -## Choosing CockroachDB - -### What is CockroachDB? - -{% include {{ page.version.version }}/faq/what-is-crdb.md %} - -### When is CockroachDB a good choice? - -CockroachDB is well suited for applications that require reliable, available, and correct data, and millisecond response times, regardless of scale. It is built to automatically replicate, rebalance, and recover with minimal configuration and operational overhead. Specific use cases include: - -- Distributed or replicated OLTP -- Multi-datacenter deployments -- Multi-region deployments -- Cloud migrations -- Infrastructure initiatives built for the cloud - - - -CockroachDB returns single-row reads in 2ms or less and single-row writes in 4ms or less, and supports a variety of [SQL and operational tuning practices]({% link {{ page.version.version }}/performance-best-practices-overview.md %}) for optimizing query performance. However, CockroachDB is not yet suitable for heavy analytics / OLAP. - -### How easy is it to get started with CockroachDB? - -You can get started with CockroachDB with just a few clicks. Sign up for a CockroachDB {{ site.data.products.cloud }} account to create a CockroachDB {{ site.data.products.standard }} cluster. For more details, see [Quickstart]({% link cockroachcloud/quickstart.md %}). - -Alternatively, you can download a binary or run our official Kubernetes configurations or Docker image. For more details, see [Install CockroachDB]({% link {{ page.version.version }}/install-cockroachdb.md %}). - -### How do I know which CockroachDB deployment option is right for my project? - -There are four ways to use and deploy CockroachDB: - -- **CockroachDB {{ site.data.products.basic }}**: A multi-tenant CockroachDB deployment, managed by Cockroach Labs. CockroachDB {{ site.data.products.basic }} provides highly available database clusters that scale instantly and automatically for small production and dev/test workloads. -- **CockroachDB {{ site.data.products.standard }}**: A multi-tenant CockroachDB deployment, managed by Cockroach Labs. 
CockroachDB {{ site.data.products.standard }} allows you to consolidate a variety of production workloads while optimizing cost. -- **CockroachDB {{ site.data.products.advanced }}**: A single tenant CockroachDB deployment, managed by Cockroach Labs. CockroachDB {{ site.data.products.advanced }} provides dedicated hardware to support stringent regulatory requirements and enhanced compliance, targeting production workloads with advanced Enterprise requirements. -- **CockroachDB {{ site.data.products.core }}**: A self-managed CockroachDB deployment, backed by Cockroach Labs Support, for multiple clouds and regions. This deployment option is good if you require complete control over the database environment and require [Enterprise features]({% link {{ page.version.version }}/enterprise-licensing.md %}). - -## About the database - -### How does CockroachDB scale? - -CockroachDB scales horizontally with minimal operator overhead. - -At the key-value level, CockroachDB starts off with a single, empty range. As you put data in, this single range eventually reaches [a threshold size]({% link {{ page.version.version }}/configure-replication-zones.md %}#range-max-bytes). When that happens, the data [splits into two ranges]({% link {{ page.version.version }}/architecture/distribution-layer.md %}#range-splits), each covering a contiguous segment of the entire key-value space. This process continues indefinitely; as new data flows in, existing ranges continue to split into new ranges, aiming to keep a relatively small and consistent range size. - -When your cluster spans multiple nodes (physical machines, virtual machines, or containers), newly split ranges are automatically rebalanced to nodes with more capacity. CockroachDB communicates opportunities for rebalancing using a peer-to-peer [gossip protocol](https://wikipedia.org/wiki/Gossip_protocol) by which nodes exchange network addresses, store capacity, and other information. - -For more information about scaling a CockroachDB cluster, see the following docs: - -- [Manage Your Advanced Cluster - Scale your cluster]({% link cockroachcloud/advanced-cluster-management.md %}#scale-your-cluster) -- [`cockroach start` - Add a node to a cluster]({% link {{ page.version.version }}/cockroach-start.md %}#add-a-node-to-a-cluster) - -### How does CockroachDB survive failures? - -CockroachDB is designed to survive software and hardware failures, from server restarts to datacenter outages. This is accomplished without confusing artifacts typical of other distributed systems (e.g., stale reads) using strongly-consistent replication as well as automated repair after failures. - -**Replication** - -CockroachDB replicates your data for availability and guarantees consistency between replicas using the [Raft consensus algorithm](https://raft.github.io/), a popular alternative to Paxos. You can [define the location of replicas]({% link {{ page.version.version }}/configure-replication-zones.md %}) in various ways, depending on the types of failures you want to secure against and your network topology. You can locate replicas on: - -- Different servers within a rack to tolerate server failures -- Different servers on different racks within a datacenter to tolerate rack power/network failures -- Different servers in different datacenters to tolerate large scale network or power outages - -In a CockroachDB cluster spread across multiple geographic regions, the round-trip latency between regions will have a direct effect on your database's performance. 
In such cases, it is important to think about the latency requirements of each table and then use the appropriate [data topologies]({% link {{ page.version.version }}/topology-patterns.md %}) to locate data for optimal performance and resiliency. For a step-by-step demonstration, see [Low Latency Multi-Region Deployment]({% link {{ page.version.version }}/demo-low-latency-multi-region-deployment.md %}). - -**Automated Repair** - -For short-term failures, such as a server restart, CockroachDB uses Raft to continue seamlessly as long as a majority of replicas remain available. Raft makes sure that a new “leader” for each group of replicas is elected if the former leader fails, so that transactions can continue and affected replicas can rejoin their group once they’re back online. For longer-term failures, such as a server/rack going down for an extended period of time or a datacenter outage, CockroachDB automatically rebalances replicas from the missing nodes, using the unaffected replicas as sources. Using capacity information from the gossip network, new locations in the cluster are identified and the missing replicas are re-replicated in a distributed fashion using all available nodes and the aggregate disk and network bandwidth of the cluster. - -### How is CockroachDB strongly-consistent? - -CockroachDB guarantees [serializable SQL transactions]({% link {{ page.version.version }}/demo-serializable.md %}), the highest isolation level defined by the SQL standard. It does so by combining the Raft consensus algorithm for writes and a custom time-based synchronization algorithms for reads. - -- Stored data is versioned with MVCC, so [reads simply limit their scope to the data visible at the time the read transaction started]({% link {{ page.version.version }}/architecture/transaction-layer.md %}#time-and-hybrid-logical-clocks). - -- Writes are serviced using the [Raft consensus algorithm](https://raft.github.io/), a popular alternative to Paxos. A consensus algorithm guarantees that any majority of replicas together always agree on whether an update was committed successfully. Updates (writes) must reach a majority of replicas (2 out of 3 by default) before they are considered committed. - - To ensure that a write transaction does not interfere with read transactions that start after it, CockroachDB also uses a [timestamp cache]({% link {{ page.version.version }}/architecture/transaction-layer.md %}#timestamp-cache) which remembers when data was last read by ongoing transactions. - - This ensures that clients always observe serializable consistency with regards to other concurrent transactions. - -### How is CockroachDB both highly available and strongly consistent? - -The [CAP theorem](https://wikipedia.org/wiki/CAP_theorem) states that it is impossible for a distributed system to simultaneously provide more than two out of the following three guarantees: - -- Consistency -- Availability -- Partition Tolerance - -CockroachDB is a CP (consistent and partition tolerant) system. This means -that, in the presence of partitions, the system will become unavailable rather than do anything which might cause inconsistent results. For example, writes require acknowledgments from a majority of replicas, and reads require a lease, which can only be transferred to a different node when writes are possible. - -Separately, CockroachDB is also Highly Available, although "available" here means something different than the way it is used in the CAP theorem. 
In the CAP theorem, availability is a binary property, but for High Availability, we talk about availability as a spectrum (using terms like "five nines" for a system that is available 99.999% of the time). - -Being both CP and HA means that whenever a majority of replicas can talk to each other, they should be able to make progress. For example, if you deploy CockroachDB to three datacenters and the network link to one of them fails, the other two datacenters should be able to operate normally with only a few seconds' disruption. We do this by attempting to detect partitions and failures quickly and efficiently, [transferring leadership to nodes that are able to communicate with the majority]({% link {{ page.version.version }}/architecture/replication-layer.md %}#how-leases-are-transferred-from-a-dead-node), and routing internal traffic away from nodes that are partitioned away. - -### Why is CockroachDB SQL? - -At the lowest level, CockroachDB is a distributed, strongly-consistent, transactional key-value store, but the external API is Standard SQL with extensions. This provides developers familiar relational concepts such as schemas, tables, columns, and indexes and the ability to structure, manipulate, and query data using well-established and time-proven tools and processes. Also, since CockroachDB supports the PostgreSQL wire protocol, it’s simple to get your application talking to Cockroach; just find your [PostgreSQL language-specific driver]({% link {{ page.version.version }}/install-client-drivers.md %}) and start building. - -For more details, learn our [basic CockroachDB SQL statements]({% link {{ page.version.version }}/learn-cockroachdb-sql.md %}), explore the [full SQL grammar]({% link {{ page.version.version }}/sql-grammar.md %}), and try it out via our [built-in SQL client]({% link {{ page.version.version }}/cockroach-sql.md %}). Also, to understand how CockroachDB maps SQL table data to key-value storage and how CockroachDB chooses the best index for running a query, see [SQL in CockroachDB](https://www.cockroachlabs.com/blog/sql-in-cockroachdb-mapping-table-data-to-key-value-storage/) and [Index Selection in CockroachDB](https://www.cockroachlabs.com/blog/index-selection-cockroachdb-2/). - -### Does CockroachDB support distributed transactions? - -Yes. CockroachDB distributes transactions across your cluster, whether it’s a few servers in a single location or many servers across multiple datacenters. Unlike with sharded setups, you do not need to know the precise location of data; you just talk to any node in your cluster and CockroachDB gets your transaction to the right place seamlessly. Distributed transactions proceed without downtime or additional latency while rebalancing is underway. You can even move tables – or entire databases – between data centers or cloud infrastructure providers while the cluster is under load. - -### Do transactions in CockroachDB guarantee ACID semantics? - -Yes. Every [transaction]({% link {{ page.version.version }}/transactions.md %}) in CockroachDB guarantees [ACID semantics](https://en.wikipedia.org/wiki/ACID) spanning arbitrary tables and rows, even when data is distributed. - -- **Atomicity:** Transactions in CockroachDB are “all or nothing.” If any part of a transaction fails, the entire transaction is aborted, and the database is left unchanged. If a transaction succeeds, all mutations are applied together with virtual simultaneity. 
For a detailed discussion of atomicity in CockroachDB transactions, see [How CockroachDB Distributes Atomic Transactions](https://www.cockroachlabs.com/blog/how-cockroachdb-distributes-atomic-transactions/). -- **Consistency:** SQL operations never see any intermediate states and move the database from one valid state to another, keeping indexes up to date. Operations always see the results of previously completed statements on overlapping data and maintain specified constraints such as unique columns. For a detailed look at how we've tested CockroachDB for correctness and consistency, see [CockroachDB Beta Passes Jepsen Testing](https://www.cockroachlabs.com/blog/cockroachdb-beta-passes-jepsen-testing/). -- **Isolation:** Transactions in CockroachDB implement the strongest ANSI isolation level: serializable (`SERIALIZABLE`). This means that transactions will never result in anomalies. For more information about transaction isolation in CockroachDB, see [Transactions: Isolation Levels]({% link {{ page.version.version }}/transactions.md %}#isolation-levels). -- **Durability:** In CockroachDB, every acknowledged write has been persisted consistently on a majority of replicas (by default, at least 2) via the [Raft consensus algorithm](https://raft.github.io/). Power or disk failures that affect only a minority of replicas (typically 1) do not prevent the cluster from operating and do not lose any data. - -### Since CockroachDB is inspired by Spanner, does it require atomic clocks to synchronize time? - -No. CockroachDB was designed to work without atomic clocks or GPS clocks. It’s a database intended to be run on arbitrary collections of nodes, from physical servers in a corp development cluster to public cloud infrastructure using the flavor-of-the-month virtualization layer. It’d be a showstopper to require an external dependency on specialized hardware for clock synchronization. However, CockroachDB does require moderate levels of clock synchronization for correctness. If clocks drift past a maximum threshold, nodes will be taken offline. It's therefore highly recommended to run [NTP](http://www.ntp.org/) or other clock synchronization software on each node. - -For more details on how CockroachDB handles unsynchronized clocks, see [Clock Synchronization]({% link {{ page.version.version }}/recommended-production-settings.md %}#clock-synchronization). And for a broader discussion of clocks, and the differences between clocks in Spanner and CockroachDB, see [Living Without Atomic Clocks](https://www.cockroachlabs.com/blog/living-without-atomic-clocks/). - -### What languages can I use to work with CockroachDB? - -CockroachDB supports the PostgreSQL wire protocol, so you can use any available PostgreSQL client drivers. We've tested it from the following languages: - -- JavaScript/TypeScript -- Python -- Go -- Java -- Ruby -- C -- C#(.NET) -- Rust - -See [Install Client Drivers]({% link {{ page.version.version }}/install-client-drivers.md %}) for more details. - -### Why does CockroachDB use the PostgreSQL wire protocol instead of the MySQL protocol? - -CockroachDB uses the PostgreSQL wire protocol because it is better documented than the MySQL protocol, and because PostgreSQL has a liberal Open Source license, similar to BSD or MIT licenses, whereas MySQL has the more restrictive GNU General Public License. - -Note, however, that the protocol used doesn't significantly impact how easy it is to port applications. 
Swapping out SQL network drivers is rather straightforward in nearly every language. What makes it hard to move from one database to another is the dialect of SQL in use. CockroachDB's dialect is based on PostgreSQL as well. - -### Can a PostgreSQL or MySQL application be migrated to CockroachDB? - -Yes. Most users should be able to follow the instructions in [Migrate from PostgreSQL]({% link {{ page.version.version }}/migrate-from-postgres.md %}) or [Migrate from MySQL]({% link {{ page.version.version }}/migrate-from-mysql.md %}). Due to differences in available features and syntax, some features supported by these databases may require manual effort to port to CockroachDB. Check those pages for details. - -We also fully support [importing your data via CSV]({% link {{ page.version.version }}/migrate-from-csv.md %}). - -### What is CockroachDB’s security model? - -You can run a secure or insecure CockroachDB cluster. When secure, client/node and inter-node communication is encrypted, and SSL certificates authenticate the identity of both clients and nodes. When insecure, there's no encryption or authentication. - -Also, CockroachDB supports common SQL privileges on databases and tables. The `root` user has privileges for all databases, while unique users can be granted privileges for specific statements at the database and table-levels. - -For more details, see our [Security Overview]({% link {{ page.version.version }}/security-reference/security-overview.md %}). - -## How CockroachDB compares - -### How does CockroachDB compare to MySQL or PostgreSQL? - -While all of these databases support SQL syntax, CockroachDB is the only one that scales easily (without the manual complexity of sharding), rebalances and repairs itself automatically, and distributes transactions seamlessly across your cluster. - -For more insight, see [CockroachDB in Comparison]({% link {{ page.version.version }}/cockroachdb-in-comparison.md %}). - -### How does CockroachDB compare to Cassandra, HBase, MongoDB, or Riak? - -While all of these are distributed databases, only CockroachDB supports distributed transactions and provides strong consistency. Also, these other databases provide custom APIs, whereas CockroachDB offers standard SQL with extensions. - -For more insight, see [CockroachDB in Comparison]({% link {{ page.version.version }}/cockroachdb-in-comparison.md %}). - -## Have questions that weren’t answered? - -Try searching the rest of our docs for answers or using our other [support resources]({% link {{ page.version.version }}/support-resources.md %}), including: - -- [CockroachDB Community Forum](https://forum.cockroachlabs.com) -- [CockroachDB Community Slack](https://cockroachdb.slack.com) -- [StackOverflow](http://stackoverflow.com/questions/tagged/cockroachdb) -- [CockroachDB Support Portal](https://support.cockroachlabs.com) diff --git a/src/current/v22.1/functions-and-operators.md b/src/current/v22.1/functions-and-operators.md deleted file mode 100644 index f847c14bd04..00000000000 --- a/src/current/v22.1/functions-and-operators.md +++ /dev/null @@ -1,145 +0,0 @@ ---- -title: Functions and Operators -summary: CockroachDB supports many built-in functions, aggregate functions, and operators. -toc: true -docs_area: reference.sql ---- - -CockroachDB supports the following SQL functions and operators for use in [scalar expressions](scalar-expressions.html). 
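-
-For example, built-in functions and operators can be combined freely in scalar expressions. The following query is a trivial, hypothetical illustration (the literal values are arbitrary):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-SELECT
-    lower('CockroachDB') AS lowered,       -- built-in string function
-    now() - INTERVAL '1 day' AS yesterday, -- function combined with an operator
-    (2 + 3) * 4 AS arithmetic;             -- operators follow the precedence rules below
-~~~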
- -{{site.data.alerts.callout_success}}In the built-in SQL shell, use \hf [function] to get inline help about a specific function.{{site.data.alerts.end}} - -## Special syntax forms - -The following syntax forms are recognized for compatibility with the -SQL standard and PostgreSQL, but are equivalent to regular built-in -functions: - -{% include {{ page.version.version }}/sql/function-special-forms.md %} - -## Function volatility - -A function's _volatility_ is a promise to the [optimizer](cost-based-optimizer.html) about the behavior of the function. - -Type | Description | Examples --------|-------------|---------- -Volatile | The function can modify the state of the database and is not guaranteed to return the same results given the same arguments in any context. | `random`, `crdb_internal.force_error`, `nextval`, `now` -Stable | The function is guaranteed to return the same results given the same arguments whenever it is evaluated within the same statement. The optimizer can optimize multiple calls of the function to a single call. | `current_timestamp`, `current_date` -Immutable | The function does not depend on configuration settings and is guaranteed to return the same results given the same arguments in any context. The optimizer can pre-evaluate the function when a query calls it with constant arguments. | `log`, `from_json` -Leakproof | The function does not depend on configuration settings and is guaranteed to return the same results given the same arguments in any context. In addition, no information about the arguments is conveyed except via the return value. Any function that might throw an error depending on the values of its arguments is not leakproof. Leakproof is strictly stronger than Immutable. | Integer [comparison](#comparison-functions) - -## Conditional and function-like operators - -The following table lists the operators that look like built-in -functions but have special evaluation rules: - - Operator | Description -----------|------------- - `ANNOTATE_TYPE(...)` | [Explicitly Typed Expression](scalar-expressions.html#explicitly-typed-expressions) - `ARRAY(...)` | [Conversion of Subquery Results to An Array](scalar-expressions.html#conversion-of-subquery-results-to-an-array) - `ARRAY[...]` | [Conversion of Scalar Expressions to An Array](scalar-expressions.html#array-constructors) - `CAST(...)` | [Type Cast](scalar-expressions.html#explicit-type-coercions) - `COALESCE(...)` | [First non-NULL expression with Short Circuit](scalar-expressions.html#coalesce-and-ifnull-expressions) - `EXISTS(...)` | [Existence Test on the Result of Subqueries](scalar-expressions.html#existence-test-on-the-result-of-subqueries) - `IF(...)` | [Conditional Evaluation](scalar-expressions.html#if-expressions) - `IFNULL(...)` | Alias for `COALESCE` restricted to two operands - `NULLIF(...)` | [Return `NULL` conditionally](scalar-expressions.html#nullif-expressions) - `ROW(...)` | [Tuple Constructor](scalar-expressions.html#tuple-constructors) - -## Built-in functions - -{% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/{{ page.release_info.crdb_branch_name }}/docs/generated/sql/functions.md %} - -## Aggregate functions - -For examples showing how to use aggregate functions, see [the `SELECT` clause documentation](select-clause.html#aggregate-functions). - -{{site.data.alerts.callout_info}} -Non-commutative aggregate functions are sensitive to the order in which the rows are processed in the surrounding [`SELECT` clause](select-clause.html#aggregate-functions). 
To specify the order in which input rows are processed, you can add an [`ORDER BY`](order-by.html) clause within the function argument list. For examples, see the [`SELECT` clause](select-clause.html#order-aggregate-function-input-rows-by-column) documentation.
-{{site.data.alerts.end}}
-
-{% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/{{ page.release_info.crdb_branch_name }}/docs/generated/sql/aggregates.md %}
-
-## Window functions
-
-{% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/{{ page.release_info.crdb_branch_name }}/docs/generated/sql/window_functions.md %}
-
-## Operators
-
-The following table lists all CockroachDB operators from highest to lowest precedence, i.e., the order in which they will be evaluated within a statement. Operators with the same precedence are left associative. This means that those operators are grouped together starting from the left and moving right.
-
-| Order of Precedence | Operator | Name | Operator Arity |
-| ------------------- | -------- | ---- | -------------- |
-| 1 | `.` | Member field access operator | binary |
-| 2 | `::` | [Type cast](scalar-expressions.html#explicit-type-coercions) | binary |
-| 3 | `-` | Unary minus | unary (prefix) |
-| | `~` | Bitwise not | unary (prefix) |
-| 4 | `^` | Exponentiation | binary |
-| 5 | `*` | Multiplication | binary |
-| | `/` | Division | binary |
-| | `//` | Floor division | binary |
-| | `%` | Modulo | binary |
-| 6 | `+` | Addition | binary |
-| | `-` | Subtraction | binary |
-| 7 | `<<` | Bitwise left-shift | binary |
-| | `>>` | Bitwise right-shift | binary |
-| | `&&` | Overlaps | binary |
-| 8 | `&` | Bitwise AND | binary |
-| 9 | `#` | Bitwise XOR | binary |
-| 10 | \| | Bitwise OR | binary |
-| 11 | \|\| | Concatenation | binary |
-| | `< ANY`, ` SOME`, ` ALL` | [Multi-valued] "less than" comparison | binary |
-| | `> ANY`, ` SOME`, ` ALL` | [Multi-valued] "greater than" comparison | binary |
-| | `= ANY`, ` SOME`, ` ALL` | [Multi-valued] "equal" comparison | binary |
-| | `<= ANY`, ` SOME`, ` ALL` | [Multi-valued] "less than or equal" comparison | binary |
-| | `>= ANY`, ` SOME`, ` ALL` | [Multi-valued] "greater than or equal" comparison | binary |
-| | `<> ANY` / `!= ANY`, `<> SOME` / `!= SOME`, `<> ALL` / `!= ALL` | [Multi-valued] "not equal" comparison | binary |
-| | `[NOT] LIKE ANY`, `[NOT] LIKE SOME`, `[NOT] LIKE ALL` | [Multi-valued] `LIKE` comparison | binary |
-| | `[NOT] ILIKE ANY`, `[NOT] ILIKE SOME`, `[NOT] ILIKE ALL` | [Multi-valued] `ILIKE` comparison | binary |
-| | `->` | Access a JSONB field, returning a JSONB value. | binary |
-| | `->>` | Access a JSONB field, returning a string. | binary |
-| | `@>` | Tests whether the left JSONB field contains the right JSONB field. | binary |
-| | `<@` | Tests whether the left JSONB field is contained by the right JSONB field. | binary |
-| | `#>` | Access a JSONB field at the specified path, returning a JSONB value. | binary |
-| | `#>>` | Access a JSONB field at the specified path, returning a string. | binary |
-| | `?` | Does the key or element string exist within the JSONB value? | binary |
-| | `?&` | Do all the key or element strings exist within the JSONB value? | binary |
-| | ?\| | Do any of the key or element strings exist within the JSONB value? | binary |
-| 12 | `[NOT] BETWEEN` | Value is [not] within the range specified | binary |
-| | `[NOT] BETWEEN SYMMETRIC` | Like `[NOT] BETWEEN`, but in non-sorted order. 
For example, whereas `a BETWEEN b AND c` means `b <= a <= c`, `a BETWEEN SYMMETRIC b AND c` means `(b <= a <= c) OR (c <= a <= b)`. | binary | -| | `[NOT] IN` | Value is [not] in the set of values specified | binary | -| | `[NOT] LIKE` | Matches [or not] LIKE expression, case sensitive | binary | -| | `[NOT] ILIKE` | Matches [or not] LIKE expression, case insensitive | binary | -| | `[NOT] SIMILAR` | Matches [or not] SIMILAR TO regular expression | binary | -| | `~` | Matches regular expression, case sensitive | binary | -| | `!~` | Does not match regular expression, case sensitive | binary | -| | `~*` | Matches regular expression, case insensitive | binary | -| | `!~*` | Does not match regular expression, case insensitive | binary | -| 13 | `=` | Equal | binary | -| | `<` | Less than | binary | -| | `>` | Greater than | binary | -| | `<=` | Less than or equal to | binary | -| | `>=` | Greater than or equal to | binary | -| | `!=`, `<>` | Not equal | binary | -| 14 | `IS [DISTINCT FROM]` | Equal, considering `NULL` as value | binary | -| | `IS NOT [DISTINCT FROM]` | `a IS NOT b` equivalent to `NOT (a IS b)` | binary | -| | `ISNULL`, `IS UNKNOWN` , `NOTNULL`, `IS NOT UNKNOWN` | Equivalent to `IS NULL` / `IS NOT NULL` | unary (postfix) | -| | `IS NAN`, `IS NOT NAN` | [Comparison with the floating-point NaN value](scalar-expressions.html#comparison-with-nan) | unary (postfix) | -| | `IS OF(...)` | Type predicate | unary (postfix) -| 15 | `NOT` | [Logical NOT](scalar-expressions.html#logical-operators) | unary | -| 16 | `AND` | [Logical AND](scalar-expressions.html#logical-operators) | binary | -| 17 | `OR` | [Logical OR](scalar-expressions.html#logical-operators) | binary | - -[Multi-valued]: scalar-expressions.html#multi-valued-comparisons - -### Supported operations - -{% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/{{ page.release_info.crdb_branch_name }}/docs/generated/sql/operators.md %} - - diff --git a/src/current/v22.1/geojson.md b/src/current/v22.1/geojson.md deleted file mode 100644 index c7a6ca33e7d..00000000000 --- a/src/current/v22.1/geojson.md +++ /dev/null @@ -1,204 +0,0 @@ ---- -title: GeoJSON -summary: The GeoJSON data format for representing spatial information is based on JavaScript Object Notation (JSON). -toc: true -docs_area: reference.sql ---- - -GeoJSON is a textual data format for representing spatial information. It is based on [JavaScript Object Notation (JSON)](https://www.json.org). - -GeoJSON can be used to represent the following spatial objects, which also have [Well Known Text (WKT)](well-known-text.html) and [Well Known Binary (WKB)](well-known-binary.html) representations: - -- Point -- LineString -- Polygon -- MultiPoint -- MultiLineString -- MultiPolygon -- GeometryCollection - -GeoJSON introduces the following additional concepts, which are not part of WKT or WKB: - -- A "Feature" object that can contain a geometric shape and some additional properties that describe that shape. This is useful, for example, when drawing maps on the internet in color, such as on [geojson.io](http://geojson.io). For an example showing how to add color to a GeoJSON feature, [see below](#geojson-features-example). -- Features can additionally be grouped together into a "FeatureCollection". - -{{site.data.alerts.callout_success}} -For more detailed information, see the [GeoJSON RFC](https://www.rfc-editor.org/rfc/rfc7946.txt). 
-{{site.data.alerts.end}} - -{{site.data.alerts.callout_info}} -GeoJSON should only be used for spatial data that uses the [WGS84](spatial-glossary.html) geographic spatial reference system. For more information, see [SRID 4326](srid-4326.html). -{{site.data.alerts.end}} - -## Example - -In the example below, we will convert a shape represented in [Well Known Text](well-known-text.html) to GeoJSON using the `ST_AsGeoJSON` [function](functions-and-operators.html#spatial-functions). - -Here is the WKT: - -~~~ -SRID=4326;POLYGON((-87.906471 43.038902, -95.992775 36.153980, -75.704722 36.076944, -87.906471 43.038902), (-87.623177 41.881832, -90.199402 38.627003, -82.446732 38.413651, -87.623177 41.881832)) -~~~ - -Convert it to GeoJSON using the `ST_AsGeoJSON` function: - -{% include_cached copy-clipboard.html %} -~~~ sql -SELECT ST_AsGeoJSON('SRID=4326;POLYGON((-87.906471 43.038902, -95.992775 36.153980, -75.704722 36.076944, -87.906471 43.038902), (-87.623177 41.881832, -90.199402 38.627003, -82.446732 38.413651, -87.623177 41.881832))'); -~~~ - -This is the JSON output of the above, but formatted: - -{% include_cached copy-clipboard.html %} -~~~ json -{ - "type": "Polygon", - "coordinates": [ - [ - [ - -87.906471, - 43.038902 - ], - [ - -95.992775, - 36.15398 - ], - [ - -75.704722, - 36.076944 - ], - [ - -87.906471, - 43.038902 - ] - ], - [ - [ - -87.623177, - 41.881832 - ], - [ - -90.199402, - 38.627003 - ], - [ - -82.446732, - 38.413651 - ], - [ - -87.623177, - 41.881832 - ] - ] - ] -} -~~~ - - - -The JSON below is modified from the output above: it is grouped into a GeoJSON `FeatureCollection` in which each `Feature` has additional styling information (in the `properties` field) that can be used in visualization tools such as [geojson.io](http://geojson.io): - -{% include_cached copy-clipboard.html %} -~~~ json -{ - "type": "FeatureCollection", - "features": [ - { - "properties": { - "fill-opacity": 0.3, - "stroke": "#30D5C8", - "stroke-width": 5, - "fill": "#30D5C8" - }, - "geometry": { - "coordinates": [ - [ - [ - -87.906471, - 43.038902 - ], - [ - -95.992775, - 36.15398 - ], - [ - -75.704722, - 36.076944 - ], - [ - -87.906471, - 43.038902 - ] - ], - [ - [ - -87.623177, - 41.881832 - ], - [ - -90.199402, - 38.627003 - ], - [ - -82.446732, - 38.413651 - ], - [ - -87.623177, - 41.881832 - ] - ] - ], - "type": "Polygon" - }, - "type": "Feature" - }, - { - "properties": { - "stroke": "yellow", - "fill-opacity": 0.3, - "stroke-width": 9, - "fill": "yellow" - }, - "geometry": { - "type": "LineString", - "coordinates": [ - [ - -87.623177, - 41.881832 - ], - [ - -90.199402, - 38.627003 - ], - [ - -82.446732, - 38.413651 - ], - [ - -87.623177, - 41.881832 - ] - ] - }, - "type": "Feature" - } - ] -} -~~~ - -Here is the geometry described above as shown on [geojson.io](http://geojson.io): - -GeoJSON.io output - -## See also - -- [GeoJSON RFC](https://www.rfc-editor.org/rfc/rfc7946.txt) -- [Spatial features](spatial-features.html) -- [Spatial tutorial](spatial-tutorial.html) -- [Spatial indexes](spatial-indexes.html) -- [Spatial and GIS Glossary of Terms](spatial-glossary.html) -- [Well known text](well-known-text.html) -- [Well known binary](well-known-binary.html) -- [SRID 4326 - longitude and latitude](srid-4326.html) -- [Using GeoServer with CockroachDB](geoserver.html) diff --git a/src/current/v22.1/geometrycollection.md b/src/current/v22.1/geometrycollection.md deleted file mode 100644 index 46cd8eda870..00000000000 --- a/src/current/v22.1/geometrycollection.md +++ /dev/null @@ -1,38 +0,0 @@ 
---- -title: GEOMETRYCOLLECTION -summary: A GEOMETRYCOLLECTION is used for gathering one or more of the spatial object types into a group. -toc: true -docs_area: reference.sql ---- - -A `GEOMETRYCOLLECTION` is a collection of heterogeneous [spatial objects](spatial-features.html#spatial-objects), such as [Points](point.html), [LineStrings](linestring.html), [Polygons](polygon.html), or other `GEOMETRYCOLLECTION`s. It provides a way of referring to a group of spatial objects as one "thing" so that you can operate on it/them more conveniently using various SQL functions. - -{% include {{page.version.version}}/spatial/zmcoords.md %} - -## Examples - -A GeometryCollection can be created from SQL by calling the `st_geomfromtext` function on a GeometryCollection definition expressed in the [Well Known Text (WKT)](spatial-glossary.html#wkt) format as shown below. - -{% include_cached copy-clipboard.html %} -~~~ sql -SELECT ST_GeomFromText('GEOMETRYCOLLECTION(POINT(0 0), LINESTRING(0 0, 1440 900), POLYGON((0 0, 0 1024, 1024 1024, 1024 0, 0 0)))'); -~~~ - -~~~ - st_geomfromtext --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- - 0107000000030000000101000000000000000000000000000000000000000102000000020000000000000000000000000000000000000000000000008096400000000000208C40010300000001000000050000000000000000000000000000000000000000000000000000000000000000009040000000000000904000000000000090400000000000009040000000000000000000000000000000000000000000000000 -(1 row) -~~~ - -## See also - -- [Spatial tutorial](spatial-tutorial.html) -- [Spatial objects](spatial-features.html#spatial-objects) -- [POINT](point.html) -- [LINESTRING](linestring.html) -- [POLYGON](polygon.html) -- [MULTIPOINT](multipoint.html) -- [MULTILINESTRING](multilinestring.html) -- [MULTIPOLYGON](multipolygon.html) -- [Using GeoServer with CockroachDB](geoserver.html) diff --git a/src/current/v22.1/geoserver.md b/src/current/v22.1/geoserver.md deleted file mode 100644 index 6894a813edd..00000000000 --- a/src/current/v22.1/geoserver.md +++ /dev/null @@ -1,207 +0,0 @@ ---- -title: Using GeoServer with CockroachDB -summary: Tutorial for configuring GeoServer to use CockroachDB. -toc: true -toc_not_nested: true -docs_area: ---- - -This page has instructions for configuring [GeoServer](http://geoserver.org) to use CockroachDB as the underlying database. - -The instructions here reuse parts of the data set described in the [Spatial Data tutorial](spatial-tutorial.html), specifically the `tutorial.roads` table, which contains the [U.S. National Atlas data set](https://www.sciencebase.gov/catalog/file/get/581d052be4b08da350d524ce?f=__disk__60%2F6b%2F4e%2F606b4e564884da8cca57ffeb229cd817006616e0&transform=1&allowOpen=true). - -Many of the instructions on this page come from the following GeoServer documentation pages: - -- [Using the web administration interface](https://docs.geoserver.org/stable/en/user/gettingstarted/web-admin-quickstart/index.html) -- [Publishing a PostGIS table](https://docs.geoserver.org/stable/en/user/gettingstarted/postgis-quickstart/index.html). - -## Before you begin - -You must have the following set up before proceeding with this tutorial: - -1. CockroachDB [installed on the local machine](install-cockroachdb.html) -1. 
GeoServer [installed on the local machine](https://docs.geoserver.org/stable/en/user/installation/index.html#installation). - -These instructions assume you are running on a UNIX-like system. - -{{site.data.alerts.callout_success}} -Mac users who use [Homebrew](https://brew.sh) can install GeoServer by typing `brew install geoserver`. -{{site.data.alerts.end}} - -## Step 1. Start CockroachDB and connect to your cluster - -Start a CockroachDB cluster by following the instructions in [Start a Local Cluster](start-a-local-cluster.html). - -## Step 2. Load spatial data - -Connect to the running cluster from the [SQL client](cockroach-sql.html) and enter the statements below. - -First, [create](create-database.html) the `tutorial` database: - -{% include_cached copy-clipboard.html %} -~~~ sql -CREATE DATABASE tutorial; -~~~ - -Next, switch to the `tutorial` database: - -{% include_cached copy-clipboard.html %} -~~~ sql -USE tutorial; -~~~ - -Finally, load the spatial data set: - -{% include_cached copy-clipboard.html %} -~~~ sql -IMPORT PGDUMP ('https://spatial-tutorial.s3.us-east-2.amazonaws.com/bookstores-and-roads-20210125.sql') WITH ignore_unsupported_statements; -~~~ - -## Step 3. Turn on CockroachDB's experimental box comparison operators - -CockroachDB's support for GeoServer is still in development. To use CockroachDB with GeoServer, you will need to enable the use of certain experimental box2d comparison operators by changing the following [cluster setting](cluster-settings.html): - -{% include_cached copy-clipboard.html %} -~~~ sql -SET CLUSTER SETTING sql.spatial.experimental_box2d_comparison_operators.enabled = ON; -~~~ - -The reasons the box2d comparison operators are experimental in CockroachDB are as follows: - -- PostGIS uses the `&&`, `~`, and `@` operators to do bounding box comparisons. These comparisons can always be index-accelerated by PostgreSQL since it uses [R-tree based indexing](https://en.wikipedia.org/wiki/R-tree) to generate its coverings. -- CockroachDB [uses a different indexing strategy based on space-filling curves](spatial-indexes.html) since this is necessary for [scaling horizontally](frequently-asked-questions.html#how-does-cockroachdb-scale). - - This means that the coverings generated by CockroachDB's `&&`, `~`, and `@` operators for index-accelerated lookups are not the same as the bounding box coverings generated by PostGIS. - - In practice, CockroachDB may return a smaller set of results for the same query, as the space-filling curve covering is often more exact than a bounding box covering, and will exclude from the result set any geometries that have an intersecting bounding box but where no part of the geometry hits the actual bounding box. - - Note that the behavior described above only applies to index-accelerated lookups. - -## Step 4. Start GeoServer - -The easiest place to create the GeoServer data directory is in your user's home directory. 
- -In the UNIX shell, run the following command: - -{% include_cached copy-clipboard.html %} -~~~ shell -mkdir -p $HOME/geoserver -~~~ - -Next, start GeoServer by running the following command: - -{% include_cached copy-clipboard.html %} -~~~ shell -geoserver $HOME/geoserver -~~~ - -You should see some log output that looks like the following: - -~~~ -2021-06-23 11:44:22.617:INFO::main: Logging initialized @721ms to org.eclipse.jetty.util.log.StdErrLog -2021-06-23 11:44:22.944:INFO:oejs.Server:main: jetty-9.4.36.v20210114; built: 2021-01-14T16:44:28.689Z; git: 238ec6997c7806b055319a6d11f8ae7564adc0de; jvm 1.8.0_282-b08 -2021-06-23 11:44:22.969:INFO:oejdp.ScanningAppProvider:main: Deployment monitor [file:///usr/local/Cellar/geoserver/2.19.1/libexec/webapps/] at interval 1 -2021-06-23 11:44:23.732:INFO:oejw.StandardDescriptorProcessor:main: NO JSP Support for /geoserver, did not find org.eclipse.jetty.jsp.JettyJspServlet -2021-06-23 11:44:24.687:INFO:oejs.session:main: DefaultSessionIdManager workerName=node0 -2021-06-23 11:44:24.687:INFO:oejs.session:main: No SessionScavenger set, using defaults -... -~~~ - -## Step 5. Log in to GeoServer - -In this and the following steps we will set up GeoServer so it can access the spatial data we loaded in [Step 2](#step-2-load-spatial-data). - -Open your web browser and navigate to your [locally running GeoServer instance](http://localhost:8080/geoserver/web/). Log in using the default credentials: username `admin`, password `geoserver`. - -## Step 6. Set up a GeoServer Workspace - -In the left-hand navigation menu, click **Data > Workspaces**. The **Workspaces** page will load. Click the **Add new workspace** button. - -On the **New Workspace** page, enter the following information: - -- In the **Name** field, enter the text "spatial-tutorial". -- In the **Namespace URI** field, enter the URL for the spatial tutorial where this data set is used: https://www.cockroachlabs.com/docs/stable/spatial-data.html. - -Press the **Save** button. - -You will be redirected to the **Workspaces** page, and you should see a workspace called **spatial-tutorial** in the list. - -## Step 7. Configure GeoServer to use CockroachDB - -In the left-hand navigation menu, click **Data > Stores**. The **Stores** page will load. Click the **Add new Store** button. - -You will be taken to the **New data source** page. Under the list of **Vector Data Sources**, click **PostGIS**. - -This opens the **New Vector Data Source** page, where you need to enter the following information: - -1. Under **Basic Store Info**, fill in the **Data Source Name** field with the text: `CockroachDB` - -1. Under **Connection Parameters**, edit **port** to the default CockroachDB port: `26257` - -1. Edit the **database** field to add the text: `tutorial` - -1. Fill in the **user** field with the text: `root` - -1. Delete the contents of the **passwd** field, if any - -Click **Save**, and you will be redirected to the **New Layer** page, with the following unpublished layers: - -- bookstore_routes -- bookstores -- roads - -Click the **Publish** button to the right of the `roads` layer. - -This will bring you to the **Edit Layer** page, where you need to enter the following information: - -1. In the **Bounding Boxes** section, for the **Native Bounding Box** settings, click the **Compute from data** button, which will fill in the form fields. -1. 
Also in the **Bounding Boxes** section, for the **Lat/Lon Bounding Box** setting, click the **Compute from native bounds** button, which will fill in the form fields. - -Click **Save**, and you will be redirected to the **Layers** page. - -## Step 8. View the `roads` layer - -In the left-hand navigation menu, click **Data > Layer Preview**. - -You will be redirected to the **Layer Preview** page. - -In the row for the `roads` layer, click the **OpenLayers** button under the **Common Formats** column. - -Your browser should open a new tab with the title **OpenLayers map preview**. It should show a map view that looks like the following: - -GeoServer U.S. National Atlas preview - -## See also - -- [Install CockroachDB](install-cockroachdb.html) -- [Working with Spatial Data](spatial-data.html) -- [Spatial Features](spatial-features.html) -- [Spatial Indexes](spatial-indexes.html) -- [Spatial & GIS Glossary of Terms](spatial-glossary.html) -- [Working with Spatial Data](spatial-data.html) -- [Migrate from Shapefiles](migrate-from-shapefiles.html) -- [Migrate from GeoJSON](migrate-from-geojson.html) -- [Migrate from GeoPackage](migrate-from-geopackage.html) -- [Migrate from OpenStreetMap](migrate-from-openstreetmap.html) -- [Spatial Functions](functions-and-operators.html#spatial-functions) -- [POINT](point.html) -- [LINESTRING](linestring.html) -- [POLYGON](polygon.html) -- [MULTIPOINT](multipoint.html) -- [MULTILINESTRING](multilinestring.html) -- [MULTIPOLYGON](multipolygon.html) -- [GEOMETRYCOLLECTION](geometrycollection.html) -- [Well Known Text](well-known-text.html) -- [Well Known Binary](well-known-binary.html) -- [GeoJSON](geojson.html) -- [SRID 4326 - Longitude and Latitude](srid-4326.html) -- [`ST_Contains`](st_contains.html) -- [`ST_ConvexHull`](st_convexhull.html) -- [`ST_CoveredBy`](st_coveredby.html) -- [`ST_Covers`](st_covers.html) -- [`ST_Disjoint`](st_disjoint.html) -- [`ST_Equals`](st_equals.html) -- [`ST_Intersects`](st_intersects.html) -- [`ST_Overlaps`](st_overlaps.html) -- [`ST_Touches`](st_touches.html) -- [`ST_Union`](st_union.html) -- [`ST_Within`](st_within.html) -- [Troubleshooting Overview](troubleshooting-overview.html) -- [Support Resources](support-resources.html) diff --git a/src/current/v22.1/get-started-with-enterprise-trial.md b/src/current/v22.1/get-started-with-enterprise-trial.md deleted file mode 100644 index 18e71f51a99..00000000000 --- a/src/current/v22.1/get-started-with-enterprise-trial.md +++ /dev/null @@ -1,43 +0,0 @@ ---- -title: Enterprise Trial –– Get Started -summary: Check out this page to get started with your CockroachDB Enterprise Trial -toc: true -license: true -docs_area: ---- - -Congratulations on starting your CockroachDB Enterprise Trial! With it, you'll not only get access to CockroachDB's core capabilities like [high availability](frequently-asked-questions.html#how-does-cockroachdb-survive-failures) and [`SERIALIZABLE` isolation](frequently-asked-questions.html#how-is-cockroachdb-strongly-consistent), but also our Enterprise-only features like distributed [`BACKUP`](backup.html) & [`RESTORE`](restore.html), [multi-region capabilities](multiregion-overview.html), and [cluster visualization](enable-node-map.html). - -## Install CockroachDB - -If you haven't already, you'll need to [locally install](install-cockroachdb.html), [remotely deploy](manual-deployment.html), or [orchestrate](kubernetes-overview.html) CockroachDB. 
- -## Enable Enterprise features - -{% include {{ page.version.version }}/misc/set-enterprise-license.md %} - -You can then use the [`SHOW CLUSTER SETTING`](set-cluster-setting.html) command to verify your license: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW CLUSTER SETTING cluster.organization; -~~~ - -## Use Enterprise features - -Your cluster now has access to all of CockroachDB's Enterprise features for the length of the trial: - -{% include {{ page.version.version }}/misc/enterprise-features.md %} - -## Getting help - -If you or your team need any help during your trial, our engineers are available on [CockroachDB Community Slack](https://cockroachdb.slack.com), [our forum](https://forum.cockroachlabs.com/), or [GitHub](https://github.com/cockroachdb/cockroach). - -Also consider checking out [Cockroach University](https://university.cockroachlabs.com/) for free online courses that help you get the most out of CockroachDB. - -## See also - -- [Licensing FAQs](licensing-faqs.html) -- [`SET CLUSTER SETTING`](set-cluster-setting.html) -- [`SHOW CLUSTER SETTING`](show-cluster-setting.html) -- [Cockroach University](https://university.cockroachlabs.com/) diff --git a/src/current/v22.1/global-tables.md b/src/current/v22.1/global-tables.md deleted file mode 100644 index d9d0e7585bb..00000000000 --- a/src/current/v22.1/global-tables.md +++ /dev/null @@ -1,105 +0,0 @@ ---- -title: Global Tables -summary: Guidance on using global table locality in a multi-region deployment. -toc: true -docs_area: deploy ---- - -In a [multi-region deployment](multiregion-overview.html), [`GLOBAL` table locality](multiregion-overview.html#global-tables) is a good choice for tables with the following requirements: - -- Read latency must be low, but write latency can be much higher. -- Reads must be up-to-date for business reasons or because the table is referenced by [foreign keys](foreign-key.html). -- Rows in the table, and all latency-sensitive reads, **cannot** be tied to specific regions. - -In general, this pattern is suited well for reference tables that are rarely updated. - -Tables with the `GLOBAL` locality can survive zone or region failures, depending on the database-level [survival goal](multiregion-overview.html#survival-goals) setting. - -{{site.data.alerts.callout_success}} -{% include {{page.version.version}}/misc/multiregion-max-offset.md %} -{{site.data.alerts.end}} - -## Before you begin - -{% include enterprise-feature.md %} - -### Fundamentals - -{% include {{ page.version.version }}/topology-patterns/multiregion-fundamentals.md %} - -### Cluster setup - -{% include {{ page.version.version }}/topology-patterns/multi-region-cluster-setup.md %} - -## Configuration - -### Summary - -To use this pattern, set the [table locality](multiregion-overview.html#table-locality) to `GLOBAL`. - -{% include {{page.version.version}}/sql/global-table-description.md %} - -### Steps - -{% include {{page.version.version}}/topology-patterns/multiregion-db-setup.md %} - -1. Create a [`GLOBAL` table](multiregion-overview.html#global-tables) by issuing the following statement: - - {% include_cached copy-clipboard.html %} - ~~~ sql - CREATE TABLE postal_codes ( - id INT PRIMARY KEY, - code STRING - ) LOCALITY GLOBAL; - ~~~ - - Alternatively, you can set an existing table's locality to `GLOBAL` using [`ALTER TABLE ... 
SET LOCALITY`](set-locality.html): - - {% include_cached copy-clipboard.html %} - ~~~ sql - > ALTER TABLE postal_codes SET LOCALITY GLOBAL; - ~~~ - -{{site.data.alerts.callout_success}} -A good way to check that your [table locality settings](multiregion-overview.html#table-locality) are having the expected effect is by monitoring how the performance metrics of a workload change as the settings are applied to a running cluster. For a tutorial showing how table localities can improve performance metrics across a multi-region cluster, see [Low Latency Reads and Writes in a Multi-Region Cluster](demo-low-latency-multi-region-deployment.html). -{{site.data.alerts.end}} - -## Characteristics - -### Latency - -Global tables support low-latency, global reads of read-mostly data using an extension to CockroachDB's standard transaction protocol called [non-blocking transactions](architecture/transaction-layer.html#non-blocking-transactions). - -#### Reads - -Thanks to the [non-blocking transaction](architecture/transaction-layer.html#non-blocking-transactions) protocol extension, reads against `GLOBAL` tables access a consistent local replica and therefore never leave the region. This keeps read latency low. - -#### Writes - -Writes incur higher latencies than reads, since they require a "commit-wait" step to ensure consistency. For more information about how this works, see [non-blocking transactions](architecture/transaction-layer.html#non-blocking-transactions). - -### Resiliency - -Because the `test` database does not specify a [survival goal](multiregion-overview.html#survival-goals), it uses the default [`ZONE` survival goal](multiregion-overview.html#surviving-zone-failures). With the default settings, an entire zone can fail without interrupting access to the database. - -For more information about how to choose a database survival goal, see [When to Use `ZONE` vs. `REGION` Survival Goals](when-to-use-zone-vs-region-survival-goals.html). - -## Troubleshooting - -### High follower read latency on global tables - -Reads on multi-region global tables can experience sporadic high latency on [follower reads](follower-reads.html) if the round trip time between cluster nodes is higher than 150ms. To work around this issue, consider setting the `kv.closed_timestamp.lead_for_global_reads_override` [cluster setting](cluster-settings.html) to a value greater than 800ms. - -The value of `kv.closed_timestamp.lead_for_global_reads_override` will impact write latency to global tables, so you should proceed in 100ms increments until the high read latency no longer occurs. If you've increased the setting to 1500ms and the problem persists, you should [contact support](support-resources.html). - -## Alternatives - -- If rows in the table, and all latency-sensitive queries, can be tied to specific geographies, consider the [`REGIONAL` Table Locality Pattern](regional-tables.html) pattern. - -## Tutorial - -For a step-by-step demonstration showing how CockroachDB's multi-region capabilities (including `GLOBAL` and `REGIONAL` tables) give you low-latency reads in a distributed cluster, see the tutorial on [Low Latency Reads and Writes in a Multi-Region Cluster](demo-low-latency-multi-region-deployment.html). 
- -## See also - -{% include {{ page.version.version }}/topology-patterns/see-also.md %} diff --git a/src/current/v22.1/grant.md b/src/current/v22.1/grant.md deleted file mode 100644 index 82e02bc1c0d..00000000000 --- a/src/current/v22.1/grant.md +++ /dev/null @@ -1,316 +0,0 @@ ---- -title: GRANT -summary: The GRANT statement grants user privileges for interacting with specific database objects and adds roles or users as a member of a role. -toc: true -docs_area: reference.sql ---- - -The `GRANT` [statement](sql-statements.html) controls each [role or user's](security-reference/authorization.html#users-and-roles) SQL privileges for interacting with specific [databases](create-database.html), [schemas](create-schema.html), [tables](create-table.html), or [user-defined types](enum.html). For privileges required by specific statements, see the documentation for the respective [SQL statement](sql-statements.html). - -You can use `GRANT` to directly grant privileges to a role or user, or you can grant membership to an existing role, which grants that role's privileges to the grantee. Users granted a privilege with `WITH GRANT OPTION` can in turn grant that privilege to others. The owner of an object implicitly has the `GRANT OPTION` for all privileges, and the `GRANT OPTION` is inherited through role memberships. - -{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %} - -## Syntax - -
    -{% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/grant.html %} -
-
-### Parameters
-
-Parameter | Description
---------------------------|------------
-`ALL`<br>`ALL PRIVILEGES` | Grant all [privileges](#supported-privileges).
-`targets` | A comma-separated list of database or table names, preceded by the object type (e.g., `DATABASE mydatabase`).<br>{{site.data.alerts.callout_info}}To grant privileges on all tables in a database or schema, you can use `GRANT ... ON TABLE *`. For an example, see [Grant privileges on all tables in a database or schema](#grant-privileges-on-all-tables-in-a-database-or-schema).{{site.data.alerts.end}}
-`target_types` | A comma-separated list of [user-defined types](create-type.html).
-`ALL TABLES IN SCHEMA` | Grant privileges on all tables in a schema or list of schemas.
-`schema_name_list` | A comma-separated list of [schemas](create-schema.html).
-`role_spec_list` | A comma-separated list of [roles](security-reference/authorization.html#users-and-roles).
-`privilege_list` | A comma-separated list of [privileges](security-reference/authorization.html#managing-privileges) to grant.
-`WITH ADMIN OPTION` | Designate the user as a role admin. Role admins can grant or [revoke](revoke.html) membership for the specified role.
-`WITH GRANT OPTION` | **New in v22.1:** Allow the user to grant the specified privilege to others.
-
-## Supported privileges
-
-Roles and users can be granted the following privileges:
-
-{% include {{ page.version.version }}/sql/privileges.md %}
-
-## Required privileges
-
-- To grant privileges, the user granting the privileges must also have the privilege being granted on the target database or tables. For example, a user granting the `SELECT` privilege on a table to another user must have the `SELECT` privilege on that table and `WITH GRANT OPTION` on `SELECT`.
-
-- To grant roles, the user granting role membership must be a role admin (i.e., a member with the `WITH ADMIN OPTION`) or a member of the `admin` role. To grant membership to the `admin` role, the user must have `WITH ADMIN OPTION` on the `admin` role.
-
-## Details
-
-### Granting privileges
-
-- When a role or user is granted privileges for a database, new tables created in the database will inherit the privileges, but the privileges can then be changed.
-
-    {{site.data.alerts.callout_info}}
-    The user does not get privileges to existing tables in the database. To grant privileges to a user on all existing tables in a database, see [Grant privileges on all tables in a database](#grant-privileges-on-all-tables-in-a-database-or-schema).
-    {{site.data.alerts.end}}
-
-- When a role or user is granted privileges for a table, the privileges are limited to the table.
-- The `root` user automatically belongs to the `admin` role and has the `ALL` privilege for new databases.
-- For privileges required by specific statements, see the documentation for the respective [SQL statement](sql-statements.html).
-
-### Granting roles
-
-- Users and roles can be members of roles.
-- The `root` user is automatically created as an `admin` role and assigned the `ALL` privilege for new databases.
-- All privileges of a role are inherited by all its members.
-- Membership loops are not allowed (direct: `A is a member of B is a member of A` or indirect: `A is a member of B is a member of C ... is a member of A`).
- -## Examples - -{% include {{page.version.version}}/sql/movr-statements.md %} - -### Grant privileges on databases - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE USER max WITH PASSWORD roach; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> GRANT ALL ON DATABASE movr TO max WITH GRANT OPTION; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW GRANTS ON DATABASE movr; -~~~ - -~~~ - database_name | grantee | privilege_type | is_grantable -----------------+---------+-----------------+-------------- - movr | admin | ALL | true - movr | max | ALL | true - movr | root | ALL | true -(3 rows) -~~~ - -### Grant privileges on specific tables in a database - -{% include_cached copy-clipboard.html %} -~~~ sql -> GRANT DELETE ON TABLE rides TO max; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW GRANTS ON TABLE rides; -~~~ - -~~~ - database_name | schema_name | table_name | grantee | privilege_type | is_grantable -----------------+-------------+------------+---------+-----------------+-------------- - movr | public | rides | admin | ALL | true - movr | public | rides | max | DELETE | false - movr | public | rides | root | ALL | true -(3 rows) -~~~ - -### Grant privileges on all tables in a database or schema - -{% include_cached copy-clipboard.html %} -~~~ sql -> GRANT SELECT ON TABLE movr.public.* TO max; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW GRANTS ON TABLE movr.public.*; -~~~ - -~~~ - database_name | schema_name | table_name | grantee | privilege_type | is_grantable -----------------+-------------+----------------------------+---------+-----------------+-------------- - movr | public | promo_codes | admin | ALL | true - movr | public | promo_codes | max | SELECT | false - movr | public | promo_codes | root | ALL | true - movr | public | rides | admin | ALL | true - movr | public | rides | max | DELETE | false - movr | public | rides | max | SELECT | false - movr | public | rides | root | ALL | true - movr | public | user_promo_codes | admin | ALL | true - movr | public | user_promo_codes | max | SELECT | false - movr | public | user_promo_codes | root | ALL | true - movr | public | users | admin | ALL | true - movr | public | users | max | SELECT | false - movr | public | users | root | ALL | true - movr | public | vehicle_location_histories | admin | ALL | true - movr | public | vehicle_location_histories | max | SELECT | false - movr | public | vehicle_location_histories | root | ALL | true - movr | public | vehicles | admin | ALL | true - movr | public | vehicles | max | SELECT | false - movr | public | vehicles | root | ALL | true -(19 rows) -~~~ - -### Make a table readable to every user in the system - -{% include_cached copy-clipboard.html %} -~~~ sql -> GRANT SELECT ON TABLE vehicles TO public; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW GRANTS ON TABLE vehicles; -~~~ - -~~~ - database_name | schema_name | table_name | grantee | privilege_type | is_grantable -----------------+-------------+------------+---------+-----------------+-------------- - movr | public | vehicles | admin | ALL | true - movr | public | vehicles | max | SELECT | false - movr | public | vehicles | public | SELECT | false - movr | public | vehicles | root | ALL | true -(4 rows) -~~~ - -### Grant privileges on schemas - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE SCHEMA cockroach_labs; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> GRANT ALL ON SCHEMA cockroach_labs TO max WITH 
GRANT OPTION; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW GRANTS ON SCHEMA cockroach_labs; -~~~ - -~~~ - database_name | schema_name | grantee | privilege_type | is_grantable -----------------+----------------+---------+-----------------+-------------- - movr | cockroach_labs | admin | ALL | true - movr | cockroach_labs | max | ALL | true - movr | cockroach_labs | root | ALL | true -(3 rows) -~~~ - -### Grant privileges on user-defined types - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TYPE status AS ENUM ('available', 'unavailable'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> GRANT ALL ON TYPE status TO max WITH GRANT OPTION; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW GRANTS ON TYPE status; -~~~ - -~~~ - database_name | schema_name | type_name | grantee | privilege_type | is_grantable -----------------+-------------+-----------+---------+-----------------+-------------- - movr | public | status | admin | ALL | true - movr | public | status | demo | ALL | false - movr | public | status | max | ALL | true - movr | public | status | public | USAGE | false - movr | public | status | root | ALL | true -(5 rows) -~~~ - -### Grant the privilege to manage the replication zones for a database or table - -{% include_cached copy-clipboard.html %} -~~~ sql -> GRANT ZONECONFIG ON TABLE rides TO max; -~~~ - -The user `max` can then use the [`CONFIGURE ZONE`](configure-zone.html) statement to add, modify, reset, or remove replication zones for the table `rides`. - -### Grant role membership - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE ROLE developer WITH CREATEDB; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE USER abbey WITH PASSWORD lincoln; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> GRANT developer TO abbey; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW GRANTS ON ROLE developer; -~~~ - -~~~ - role_name | member | is_admin | is_grantable -------------+--------+-----------+----------- - developer | abbey | false | false -(1 row) -~~~ - -### Grant the admin option - -{% include_cached copy-clipboard.html %} -~~~ sql -> GRANT developer TO abbey WITH ADMIN OPTION; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW GRANTS ON ROLE developer; -~~~ - -~~~ - role_name | member | is_admin | is_grantable -------------+--------+-----------+----------- - developer | abbey | true | true -(1 row) -~~~ - -### Grant privileges with the option to grant to others - -{% include_cached copy-clipboard.html %} -~~~ sql -> GRANT UPDATE ON TABLE rides TO max WITH GRANT OPTION; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW GRANTS ON TABLE rides; -~~~ - -~~~ - database_name | schema_name | table_name | grantee | privilege_type | is_grantable -----------------+-------------+------------+---------+-----------------+-------------- - movr | public | rides | admin | ALL | true - movr | public | rides | max | UPDATE | true - movr | public | rides | root | ALL | true -(3 rows) -~~~ - -## See also - -- [Authorization](authorization.html) -- [`REVOKE`](revoke.html) -- [`SHOW GRANTS`](show-grants.html) -- [`SHOW ROLES`](show-roles.html) -- [`CONFIGURE ZONE`](configure-zone.html) -- [Manage Users](security-reference/authorization.html#create-and-manage-users) diff --git a/src/current/v22.1/gssapi_authentication.md b/src/current/v22.1/gssapi_authentication.md deleted file mode 100644 index 655f8e63fd4..00000000000 --- 
a/src/current/v22.1/gssapi_authentication.md +++ /dev/null @@ -1,257 +0,0 @@ ---- -title: GSSAPI Authentication (Enterprise) -summary: Learn about the GSSAPI authentication features for secure CockroachDB clusters. -toc: true -docs_area: manage -keywords: authentication, ldap, kerberos, gssapi ---- - -CockroachDB supports the Generic Security Services API (GSSAPI) with Kerberos authentication. Although CockroachDB does not support communicating directly with an LDAP service, GSSAPI with Kerberos can be configured to communicate with your LDAP service to authenticate users. - -{% include enterprise-feature.md %} - -## Requirements - -- A working Active Directory or Kerberos environment -- A Service Principal -- A GSSAPI-compatible PostgreSQL Client (psql, etc.) -- A client machine with a Kerberos client installed and configured - -## Configure KDC for CockroachDB - -To use Kerberos authentication with CockroachDB, configure a Kerberos service principal name (SPN) for CockroachDB and generate a valid keytab file with the following specifications: - -- Set the SPN to the name specified by your client driver. For example, if you use the psql client, set SPN to `postgres`. -- Create SPNs for all DNS addresses that a user would use to connect to your CockroachDB cluster (including any TCP load balancers between the user and the CockroachDB node) and ensure that the keytab contains the keys for every SPN you create. - -### Active Directory - -For Active Directory, the client syntax for generating a keytab that maps a service principal to the SPN is as follows: - -{% include_cached copy-clipboard.html %} -~~~ shell -ktpass -out {keytab_filename} \ - -princ {Client_SPN}/{NODE/LB_FQDN}@{DOMAIN} \ - -mapUser {Service_Principal}@{DOMAIN} \ - -mapOp set -pType KRB5_NT_PRINCIPAL +rndPass \ - -crypto AES256-SHA1 -~~~ - -Example: - -{% include_cached copy-clipboard.html %} -~~~ shell -ktpass -out postgres.keytab \ - -princ postgres/loadbalancer1.cockroach.industries@COCKROACH.INDUSTRIES \ - -mapUser pguser@COCKROACH.INDUSTRIES \ - -mapOp set -pType KRB5_NT_PRINCIPAL +rndPass \ - -crypto AES256-SHA1 -~~~ - -Copy the resulting keytab to the database nodes. If clients are connecting to multiple addresses (more than one load balancer, or clients connecting directly to nodes), you will need to generate a keytab for each client endpoint. You may want to merge your keytabs together for easier management. You can do this using the `ktpass` command, using the following syntax: - -{% include_cached copy-clipboard.html %} -~~~ shell -ktpass -out {new_keytab_filename} \ - -in {old_keytab_filename} \ - -princ {Client_SPN}/{NODE/LB_FQDN}@{DOMAIN} \ - -mapUser {Service_Principal}@{DOMAIN} \ - -mapOp add \ - -pType KRB5_NT_PRINCIPAL +rndPass \ - -crypto AES256-SHA1 -~~~ - -Example (adds `loadbalancer2` to the above example): - -~~~ shell -ktpass -out postgres_2lb.keytab \ - -in postgres.keytab \ - -princ postgres/loadbalancer2.cockroach.industries@COCKROACH.INDUSTRIES \ - -mapUser pguser@COCKROACH.INDUSTRIES \ - -mapOp add \ - -pType KRB5_NT_PRINCIPAL +rndPass \ - -crypto AES256-SHA1 -~~~ - -### MIT KDC - -In MIT KDC, you cannot map a service principal to an SPN with a different username, so you will need to create a service principal that includes the SPN for your client. 
- -{% include_cached copy-clipboard.html %} -~~~ shell -create-user: kadmin.local -q "addprinc {SPN}/{CLIENT_FQDN}@{DOMAIN}" -pw "{initial_password}" -~~~ - -{% include_cached copy-clipboard.html %} -~~~ shell -create-keytab: kadmin.local -q "ktadd -k keytab {SPN}/{CLIENT_FQDN}@{DOMAIN}" -~~~ - -Example: - -~~~ shell -kadmin.local -q "addprinc postgres/client2.cockroach.industries@COCKROACH.INDUSTRIES" -pw "testing12345!" -kadmin.local -q "ktadd -k keytab postgres/client2.cockroach.industries@COCKROACH.INDUSTRIES" -~~~ - -Copy the resulting keytab to the database nodes. If clients are connecting to multiple addresses (more than one load balancer, or clients connecting directly to nodes), you will need to generate a keytab for each client endpoint. You may want to merge your keytabs together for easier management. The `ktutil` command can be used to read multiple keytab files and output them into a single output [here](https://web.mit.edu/kerberos/krb5-devel/doc/admin/admin_commands/ktutil.html). - - -## Configure the CockroachDB node - -1. Copy the keytab file to a location accessible by the `cockroach` binary. - -1. [Create certificates](cockroach-cert.html) for inter-node and `root` user authentication: - - {% include_cached copy-clipboard.html %} - ~~~ shell - mkdir certs my-safe-directory - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - cockroach cert create-ca \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - cockroach cert create-node \ - localhost \ - $(hostname) \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - cockroach cert create-client root \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - -1. Provide the path to the keytab in the `KRB5_KTNAME` environment variable. - - Example: `export KRB5_KTNAME=/home/cockroach/postgres.keytab` - -1. Start a CockroachDB node: - - {% include_cached copy-clipboard.html %} - ~~~ shell - cockroach start --certs-dir=certs --listen-addr=0.0.0.0 - ~~~ - -1. Connect to CockroachDB as `root` using the `root` client certificate generated above: - - {% include_cached copy-clipboard.html %} - ~~~ shell - cockroach sql --certs-dir=certs - ~~~ - -1. [Enable an Enterprise license](licensing-faqs.html#obtain-a-license). - {{site.data.alerts.callout_info}} You need the Enterprise license if you want to use the GSSAPI feature. However, if you only want to test that the GSSAPI setup is working, you do not need to enable an Enterprise license. {{site.data.alerts.end}} - -1. Enable GSSAPI authentication: - - {% include_cached copy-clipboard.html %} - ~~~ sql - SET cluster setting server.host_based_authentication.configuration = 'host all all all gss include_realm=0'; - ~~~ - - Setting the `server.host_based_authentication.configuration` [cluster setting](cluster-settings.html) to this particular value makes it mandatory for all non-`root` users to authenticate using GSSAPI. The `root` user is always an exception and remains able to authenticate using a valid client cert or a user password. - - The `include_realm=0` option is required to tell CockroachDB to remove the `@DOMAIN.COM` realm information from the username. We do not support any advanced mapping of GSSAPI usernames to CockroachDB usernames right now. 
If you want to limit which realms' users can connect, you can also add one or more `krb_realm` parameters to the end of the line as an allowlist, as follows: `host all all all gss include_realm=0 krb_realm=domain.com krb_realm=corp.domain.com` - - The syntax is based on the `pg_hba.conf` standard for PostgreSQL, which is documented [here](https://www.postgresql.org/docs/current/auth-pg-hba-conf.html). It can be used to exclude other users from Kerberos authentication. - -1. Create CockroachDB users for every Kerberos user. Ensure the username does not have the `DOMAIN.COM` realm information. For example, if one of your Kerberos users has a username `carl@realm.com`, then you need to create a CockroachDB user with the username `carl`: - - {% include_cached copy-clipboard.html %} - ~~~ sql - CREATE USER carl; - ~~~ - - Grant privileges to the user: - - {% include_cached copy-clipboard.html %} - ~~~ sql - GRANT ALL ON DATABASE defaultdb TO carl; - ~~~ - -## Configure the client - -1. Install and configure your Kerberos client: - - For CentOS/RHEL systems, run: - - {% include_cached copy-clipboard.html %} - ~~~ shell - yum install krb5-workstation - ~~~ - - For Ubuntu/Debian systems, run: - - {% include_cached copy-clipboard.html %} - ~~~ shell - apt-get install krb5-user - ~~~ - - Edit the `/etc/krb5.conf` file to include: - - {% include_cached copy-clipboard.html %} - ~~~ - [libdefaults] - default_realm = {REALM} - - [realms] - {REALM} = { - kdc = {fqdn-kdc-server or ad-server} - admin_server = {fqdn-kdc-server or ad-server} - default_domain = {realm-lower-case} - } - ~~~ - - Example: - - {% include_cached copy-clipboard.html %} - ~~~ - - [libdefaults] - default_realm = COCKROACH.INDUSTRIES - - [realms] - COCKROACH.INDUSTRIES = { - kdc = ad.cockroach.industries - admin_server = ad.cockroach.industries - default_domain = cockroach.industries - } - ~~~ - -1. Get a ticket for the database user: - - {% include_cached copy-clipboard.html %} - ~~~ shell - kinit carl - ~~~ - -1. Verify that a valid ticket has been generated: - - {% include_cached copy-clipboard.html %} - ~~~ shell - klist - ~~~ - -1. Connect to the cluster using the `cockroach sql` command as the Kerberos user: - - {% include_cached copy-clipboard.html %} - ~~~ shell - cockroach sql --certs-dir=certs -U carl - ~~~ - -1. If you specified an Enterprise license earlier, the command succeeds. This indicates that the GSSAPI authentication was successful. Otherwise, the error `ERROR: use of GSS authentication requires an Enterprise license` is shown. - -## See also - -- [Authentication](authentication.html) -- [Create Security Certificates](cockroach-cert.html) diff --git a/src/current/v22.1/hash-sharded-indexes.md b/src/current/v22.1/hash-sharded-indexes.md deleted file mode 100644 index 4d77c5c2336..00000000000 --- a/src/current/v22.1/hash-sharded-indexes.md +++ /dev/null @@ -1,170 +0,0 @@ ---- -title: Hash-sharded Indexes -summary: Hash-sharded indexes can eliminate single-range hot spots and improve write performance on sequentially-keyed indexes at a small cost to read performance -toc: true -docs_area: develop ---- - -If you are working with a table that must be indexed on sequential keys, you should use **hash-sharded indexes**. Hash-sharded indexes distribute sequential traffic uniformly across ranges, eliminating single-range hot spots and improving write performance on sequentially-keyed indexes at a small cost to read performance.
- -{{site.data.alerts.callout_info}} -Hash-sharded indexes are an implementation of hash partitioning, not hash indexing. -{{site.data.alerts.end}} - -## How hash-sharded indexes work - -### Overview - -CockroachDB automatically splits ranges of data in [the key-value store](architecture/storage-layer.html) based on [the size of the range](architecture/distribution-layer.html#range-splits) and on [the load streaming to the range](load-based-splitting.html). To split a range based on load, the system looks for a point in the range that evenly divides incoming traffic. If the range is indexed on a column of data that is sequential in nature (e.g., [an ordered sequence](sql-faqs.html#what-are-the-differences-between-uuid-sequences-and-unique_rowid) or a series of increasing, non-repeating [`TIMESTAMP`s](timestamp.html)), then all incoming writes to the range will be the last (or first) item in the index and appended to the end of the range. As a result, the system cannot find a point in the range that evenly divides the traffic, and the range cannot benefit from load-based splitting, creating a [hot spot](performance-best-practices-overview.html#hot-spots) on the single range. - -Hash-sharded indexes solve this problem by distributing sequential data across multiple nodes within your cluster, eliminating hotspots. The trade-off to this, however, is a small performance impact on reading sequential data or ranges of data, as it's not guaranteed that sequentially close values will be on the same node. - -Hash-sharded indexes contain a [virtual computed column](computed-columns.html#virtual-computed-columns), known as a shard column. CockroachDB uses this shard column, as opposed to the sequential column in the index, to control the distribution of values across the index. The shard column is hidden by default but can be seen with [`SHOW COLUMNS`](show-columns.html). - -{{site.data.alerts.callout_danger}} -In v21.2 and earlier, hash-sharded indexes create a physical `STORED` [computed column](computed-columns.html) instead of a virtual computed column. If you are using a hash-sharded index that was created in v21.2 or earlier, the `STORED` column still exists in your database. When dropping a hash-sharded index that has created a physical shard column, the shard column will also be dropped. This will require a rewrite of the table. -{{site.data.alerts.end}} - -For details about the mechanics and performance improvements of hash-sharded indexes in CockroachDB, see our [Hash Sharded Indexes Unlock Linear Scaling for Sequential Workloads](https://www.cockroachlabs.com/blog/hash-sharded-indexes-unlock-linear-scaling-for-sequential-workloads/) blog post. - -{{site.data.alerts.callout_info}} -Hash-sharded indexes created in v22.1 and later will not [backfill](changefeed-messages.html#schema-changes-with-column-backfill), as the shard column isn't stored. Hash-sharded indexes created prior to v22.1 will backfill if `schema_change_policy` is set to `backfill`, as they use a stored column. If you don't want CockroachDB to backfill hash-sharded indexes you created prior to v22.1, drop them and recreate them. -{{site.data.alerts.end}} - -### Shard count - -When creating a hash-sharded index, CockroachDB creates a specified number of shards (buckets) within the cluster based on the value of the `sql.defaults.default_hash_sharded_index_bucket_count` [cluster setting](cluster-settings.html). You can also specify a different `bucket_count` by passing in an optional storage parameter. See the example below. 
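If you do need a different cluster-wide default, here is a minimal sketch; the value `8` is an arbitrary placeholder, not a recommendation, and the setting only affects hash-sharded indexes created after it is changed:

{% include_cached copy-clipboard.html %}
~~~ sql
-- Set the default number of buckets for new hash-sharded indexes.
SET CLUSTER SETTING sql.defaults.default_hash_sharded_index_bucket_count = 8;
~~~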
- -For most use cases, no changes to the cluster setting are needed, and hash-sharded indexes can be created with `USING HASH` instead of `USING HASH WITH (bucket_count = n)`. Changing the cluster setting or storage parameter to a number greater than the number of nodes within that cluster will produce diminishing returns and is not recommended. - -A larger number of buckets allows for greater load-balancing and thus greater write throughput. However, more buckets put operations that need to scan over the data at a disadvantage; such queries must now scan over each bucket and combine the results. - -We recommend doing thorough performance testing of your workload with different `bucket_count`s if the default `bucket_count` does not satisfy your use case. - -### Hash-sharded indexes on partitioned tables - -You can create hash-sharded indexes with implicit partitioning under the following scenarios: - -- The table is partitioned implicitly with [`REGIONAL BY ROW`](multiregion-overview.html#regional-by-row-tables), and the `crdb_region` column is not part of the columns in the hash-sharded index. -- The table is partitioned implicitly with `PARTITION ALL BY`, and the partition columns are not part of the columns in the hash-sharded index. Note that `PARTITION ALL BY` is in preview. - -However, if an index of a table, whether it is a primary key or a secondary index, is explicitly partitioned with `PARTITION BY`, then that index cannot be hash-sharded. Partitioning columns, including a `REGIONAL BY ROW` table's `crdb_region` column, also cannot be used explicitly as key columns of a hash-sharded index. - -## Create a hash-sharded index - -The general process of creating a hash-sharded index is to add the `USING HASH` clause to one of the following statements: - -- [`CREATE INDEX`](create-index.html) -- [`CREATE TABLE`](create-table.html) -- [`ALTER PRIMARY KEY`](alter-primary-key.html) - -When this clause is used, CockroachDB creates a computed shard column and then stores each index shard in the underlying key-value store with one of the computed column's hash values as its prefix.
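Before the fuller examples below, here is a minimal sketch of each form; the `events` and `log` table names and the `ts` column are placeholders, and bucket counts are omitted so the cluster default applies:

{% include_cached copy-clipboard.html %}
~~~ sql
-- Add a hash-sharded secondary index to an existing table.
CREATE INDEX ON events (ts) USING HASH;

-- Define a hash-sharded primary key at table creation time.
CREATE TABLE log (ts TIMESTAMP NOT NULL, message STRING, PRIMARY KEY (ts) USING HASH);

-- Change an existing primary key to use hash sharding.
ALTER TABLE log ALTER PRIMARY KEY USING COLUMNS (ts) USING HASH;
~~~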
- -## Examples - -### Create a table with a hash-sharded primary key - -{% include {{page.version.version}}/performance/create-table-hash-sharded-primary-index.md %} - -### Create a table with a hash-sharded secondary index - -{% include {{page.version.version}}/performance/create-table-hash-sharded-secondary-index.md %} - -### Create a hash-sharded secondary index on an existing table - -{% include {{page.version.version}}/performance/create-index-hash-sharded-secondary-index.md %} - -### Alter an existing primary key to use hash sharding - -{% include {{page.version.version}}/performance/alter-primary-key-hash-sharded.md %} - -### Show hash-sharded index in `SHOW CREATE TABLE` - -Following the above [example](#create-a-hash-sharded-secondary-index-on-an-existing-table), you can show the hash-sharded index definition along with the table creation statement using `SHOW CREATE TABLE`: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW CREATE TABLE events; -~~~ - -~~~ - table_name | create_statement --------------+--------------------------------------------------------------------------------------------------------------------------------- - events | CREATE TABLE public.events ( - | product_id INT8 NOT NULL, - | owner UUID NOT NULL, - | serial_number VARCHAR NOT NULL, - | event_id UUID NOT NULL, - | ts TIMESTAMP NOT NULL, - | data JSONB NULL, - | crdb_internal_ts_shard_16 INT8 NOT VISIBLE NOT NULL AS (mod(fnv32(crdb_internal.datums_to_bytes(ts)), 16:::INT8)) VIRTUAL, - | CONSTRAINT events_pkey PRIMARY KEY (product_id ASC, owner ASC, serial_number ASC, ts ASC, event_id ASC), - | INDEX events_ts_idx (ts ASC) USING HASH WITH (bucket_count=16) - | ) -(1 row) -~~~ - -### Create a hash-sharded secondary index with a different `bucket_count` - -You can specify a different `bucket_count` via a storage parameter on a hash-sharded index to optimize either write performance or sequential read performance on a table: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE events ( - product_id INT8, - owner UUID, - serial_number VARCHAR, - event_id UUID, - ts TIMESTAMP, - data JSONB, - PRIMARY KEY (product_id, owner, serial_number, ts, event_id), - INDEX (ts) USING HASH WITH (bucket_count = 20) -); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW INDEX FROM events; -~~~ - -~~~ - table_name | index_name | non_unique | seq_in_index | column_name | direction | storing | implicit --------------+---------------+------------+--------------+---------------------------+-----------+---------+----------- - events | events_pkey | false | 1 | product_id | ASC | false | false - events | events_pkey | false | 2 | owner | ASC | false | false - events | events_pkey | false | 3 | serial_number | ASC | false | false - events | events_pkey | false | 4 | ts | ASC | false | false - events | events_pkey | false | 5 | event_id | ASC | false | false - events | events_pkey | false | 6 | data | N/A | true | false - events | events_ts_idx | true | 1 | crdb_internal_ts_shard_20 | ASC | false | true - events | events_ts_idx | true | 2 | ts | ASC | false | false - events | events_ts_idx | true | 3 | product_id | ASC | false | true - events | events_ts_idx | true | 4 | owner | ASC | false | true - events | events_ts_idx | true | 5 | serial_number | ASC | false | true - events | events_ts_idx | true | 6 | event_id | ASC | false | true -(12 rows) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM events; -~~~ - -~~~ - column_name | data_type | is_nullable | 
column_default | generation_expression | indices | is_hidden -----------------------------+-----------+-------------+----------------+---------------------------------------------------+-----------------------------+------------ - product_id | INT8 | false | NULL | | {events_pkey,events_ts_idx} | false - owner | UUID | false | NULL | | {events_pkey,events_ts_idx} | false - serial_number | VARCHAR | false | NULL | | {events_pkey,events_ts_idx} | false - event_id | UUID | false | NULL | | {events_pkey,events_ts_idx} | false - ts | TIMESTAMP | false | NULL | | {events_pkey,events_ts_idx} | false - data | JSONB | true | NULL | | {events_pkey} | false - crdb_internal_ts_shard_20 | INT8 | false | NULL | mod(fnv32(crdb_internal.datums_to_bytes(ts)), 20) | {events_ts_idx} | true -(7 rows) -~~~ - - -## See also - -- [Indexes](indexes.html) -- [`CREATE INDEX`](create-index.html) diff --git a/src/current/v22.1/hashicorp-integration.md b/src/current/v22.1/hashicorp-integration.md deleted file mode 100644 index ee96f3cb254..00000000000 --- a/src/current/v22.1/hashicorp-integration.md +++ /dev/null @@ -1,72 +0,0 @@ ---- -title: CockroachDB - HashiCorp Vault Integration -summary: Overview of uses cases for integrating CockroachDB with HashiCorp Vault -toc: true -docs_area: reference.third_party_support ---- - -This pages reviews the supported integrations between CockroachDB and [HashiCorp's Vault](https://www.vaultproject.io/). - -Vault is an identity-based secrets and encryption management service, which can either be self-hosted or accessed as a software as a service (SaaS) product through HashiCorp Cloud Platform (HCP). Vault's tooling can complement CockroachDB's data security capabilities to significantly bolster your organizational security posture. - -## Use Vault's KMS secrets engine to manage a CockroachDB {{ site.data.products.advanced }} cluster's customer-managed encryption key - -CockroachDB {{ site.data.products.advanced }} supports the use of customer-managed encrypted keys (CMEK) for the encryption of data at rest. - -[Vault's Key Management secrets engine](https://www.vaultproject.io/docs/secrets/key-management) allows customers to manage encryption keys on external key management services (KMS) such as those offered by Google Cloud Platform (GCP) or Amazon Web Services (AWS). - -CockroachDB customers can integrate these services, using Vault's KMS secrets engine to handle the full lifecycle of the encryption keys that CockroachDB {{ site.data.products.advanced }} uses to protect their data. - -Resources: - -- [CMEK overview]({% link cockroachcloud/cmek.md %}) -- [Manage Customer-Managed Encryption Keys (CMEK) for CockroachDB Advanced]({% link cockroachcloud/managing-cmek.md %}) - -## Use Vault's PKI Secrets Engine to manage a CockroachDB {{ site.data.products.advanced }} cluster's certificate authority (CA) and client certificates. - -CockroachDB {{ site.data.products.advanced }} customers can use Vault's public key infrastructure (PKI) secrets engine to manage PKI certificates for client authentication to the cluster. Vault's PKI Secrets Engine greatly eases the security-critical work involved in maintaining a certificate authority (CA), generating, signing and distributing PKI certificates. - -By using Vault to manage certificates, you can use only certificates with short validity durations, an important component of PKI security. 
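As an illustrative sketch only (the `pki` mount path, `crdb-client` role name, `maxroach` username, and TTLs are all assumptions, and the mount still needs a CA configured per the procedures linked below), issuing a short-lived client certificate from Vault might look like this:

{% include_cached copy-clipboard.html %}
~~~ shell
# Enable a PKI secrets engine mount.
vault secrets enable pki

# Define a role that caps how long issued certificates can live.
vault write pki/roles/crdb-client allow_any_name=true max_ttl=24h

# Issue a short-lived certificate whose common name matches the SQL username.
vault write pki/issue/crdb-client common_name=maxroach ttl=1h
~~~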
- -Refer to [Transport Layer Security (TLS) and Public Key Infrastructure (PKI)]({% link {{ page.version.version }}/security-reference/transport-layer-security.md %}) for an overview. - -Refer to [Certificate Authentication for SQL Clients in CockroachDB Advanced Clusters]({% link cockroachcloud/client-certs-advanced.md %}) for the procedures involved in administering PKI for a CockroachDB {{ site.data.products.advanced }} cluster. - -## Use Vault's PKI Secrets Engine to manage a CockroachDB {{ site.data.products.core }} cluster's certificate authority (CA), server, and client certificates - -CockroachDB {{ site.data.products.core }} customers can use Vault's public key infrastructure (PKI) secrets engine to manage PKI certificates for internode as well as client-cluster authentication. Vault's PKI Secrets Engine greatly eases the security-critical work involved in securely maintaining a certificate authority (CA) and generating, signing, and distributing PKI certificates. - -By using Vault to manage certificates, you can use only certificates with short validity durations, an important component of PKI security. - -Refer to [Transport Layer Security (TLS) and Public Key Infrastructure (PKI)]({% link {{ page.version.version }}/security-reference/transport-layer-security.md %}) for an overview. - -Refer to [Manage PKI certificates for a CockroachDB deployment with HashiCorp Vault]({% link {{ page.version.version }}/manage-certs-vault.md %}) for the procedures involved in administering PKI for a CockroachDB {{ site.data.products.core }} cluster. - -## Use Vault's PostgreSQL Database Secrets Engine to manage CockroachDB SQL users and their credentials - -CockroachDB users can use Vault's PostgreSQL Database Secrets Engine to handle the full lifecycle of SQL user credentials (creation, password rotation, deletion). Vault is capable of managing SQL user credentials in two ways: - -- As [Static Roles](https://www.vaultproject.io/docs/secrets/databases#static-roles), meaning that a single SQL user/role is mapped to a Vault role. - -- As [Dynamic Secrets](https://www.vaultproject.io/use-cases/dynamic-secrets), meaning that credentials are generated and issued on demand from pre-configured templates, rather than created and persisted. Credentials are issued for specific clients and for short validity durations, further minimizing both the likelihood of a credential compromise and the possible impact of any compromise that might occur. - -Try the tutorial: [Using HashiCorp Vault's Dynamic Secrets for Enhanced Database Credential Security in CockroachDB]({% link {{ page.version.version }}/vault-db-secrets-tutorial.md %}) - -## Use Vault's Transit Secrets Engine to manage a CockroachDB {{ site.data.products.core }} cluster's {{ site.data.products.enterprise }} Encryption At Rest store key - -When deploying {{ site.data.products.enterprise }}, customers can provide their own externally managed encryption keys for use as the *store key* for CockroachDB's [{{ site.data.products.enterprise }} Encryption At Rest]({% link {{ page.version.version }}/security-reference/encryption.md %}#encryption-at-rest-enterprise). - -Vault's [Transit Secrets Engine](https://www.vaultproject.io/docs/secrets/transit) can be used to generate suitable encryption keys for use as your cluster's store key.
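As a rough sketch under stated assumptions (the `transit` mount path and `crdb-store-key` key name are placeholders, and packaging the exported material into the key file format CockroachDB expects is covered in the Encryption At Rest documentation, not here), an exportable AES-256 key could be created and read back like this:

{% include_cached copy-clipboard.html %}
~~~ shell
# Enable the Transit secrets engine.
vault secrets enable transit

# Create an exportable AES-256 key to use as store key material.
vault write transit/keys/crdb-store-key type=aes256-gcm96 exportable=true

# Read back the raw key material so it can be written into a store key file.
vault read transit/export/encryption-key/crdb-store-key
~~~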
- -## See also - -- [CMEK overview]({% link cockroachcloud/cmek.md %}) -- [Manage Customer-Managed Encryption Keys (CMEK) for CockroachDB Advanced]({% link cockroachcloud/managing-cmek.md %}) -- [Transport Layer Security (TLS) and Public Key Infrastructure (PKI)]({% link {{ page.version.version }}/security-reference/transport-layer-security.md %}) -- [Certificate Authentication for SQL Clients in Advanced Clusters]({% link cockroachcloud/client-certs-advanced.md %}) -- [Manage PKI certificates for a CockroachDB deployment with HashiCorp Vault]({% link {{ page.version.version }}/manage-certs-vault.md %}) -- [Using HashiCorp Vault's Dynamic Secrets for Enhanced Database Credential Security in CockroachDB]({% link {{ page.version.version }}/vault-db-secrets-tutorial.md %}) -- [Roles]({% link {{ page.version.version }}/security-reference/authorization.md %}#roles) -- [Online Schema Changes]({% link {{ page.version.version }}/online-schema-changes.md %}) -- [`GRANT`]({% link {{ page.version.version }}/grant.md %}) -- [`REVOKE`]({% link {{ page.version.version }}/revoke.md %}) diff --git a/src/current/v22.1/import-into.md b/src/current/v22.1/import-into.md deleted file mode 100644 index 2d4227f8358..00000000000 --- a/src/current/v22.1/import-into.md +++ /dev/null @@ -1,312 +0,0 @@ ---- -title: IMPORT INTO -summary: Import CSV data into an existing CockroachDB table. -toc: true -docs_area: reference.sql ---- - -The `IMPORT INTO` [statement](sql-statements.html) imports CSV, Avro, or delimited data into an [existing table](create-table.html), by appending new rows into the table. - -## Considerations - -- `IMPORT INTO` works for existing tables. To import data into new tables, read the following [Import into a new table from a CSV file](#import-into-a-new-table-from-a-csv-file) example. -- `IMPORT INTO` takes the table **offline** before importing the data. The table will be online again once the job has completed successfully. -- `IMPORT INTO` cannot be used during a [rolling upgrade](upgrade-cockroach-version.html). -- `IMPORT INTO` is a blocking statement. To run an `IMPORT INTO` job asynchronously, use the [`DETACHED`](#options-detached) option. -- `IMPORT INTO` invalidates all [foreign keys](foreign-key.html) on the target table. To validate the foreign key(s), use the [`VALIDATE CONSTRAINT`](validate-constraint.html) statement. -- `IMPORT INTO` is an insert-only statement; it cannot be used to update existing rows—see [`UPDATE`](update.html). Imported rows cannot conflict with primary keys in the existing table, or any other [`UNIQUE`](unique.html) constraint on the table. -- `IMPORT INTO` does not offer `SELECT` or `WHERE` clauses to specify subsets of rows. To do this, use [`INSERT`](insert.html#insert-from-a-select-statement). -- `IMPORT INTO` will cause any [changefeeds](change-data-capture-overview.html) running on the targeted table to fail. -- See the [`IMPORT`](import.html) page for guidance on importing PostgreSQL and MySQL dump files. - - `IMPORT INTO` now supports importing into [`REGIONAL BY ROW`](set-locality.html#regional-by-row) tables. - -{{site.data.alerts.callout_info}} -Optimize import operations in your applications by following our [Import Performance Best Practices](import-performance-best-practices.html). -{{site.data.alerts.end}} - -## Required privileges - -#### Table privileges - -The user must have the `INSERT` and `DROP` [privileges](security-reference/authorization.html#managing-privileges) on the specified table. 
(`DROP` is required because the table is taken offline during the `IMPORT INTO`.) - -#### Source privileges - -The source file URL does _not_ require the `ADMIN` role in the following scenarios: - -- S3 and GS using `SPECIFIED` (and not `IMPLICIT`) credentials. Azure is always `SPECIFIED` by default. -- [Userfile](use-userfile-for-bulk-operations.html) - -The source file URL _does_ require the `ADMIN` role in the following scenarios: - -- S3 or GS using `IMPLICIT` credentials -- Use of a [custom endpoint](https://docs.aws.amazon.com/sdk-for-go/api/aws/endpoints/) on S3 -- [Nodelocal](cockroach-nodelocal-upload.html), [HTTP](use-a-local-file-server-for-bulk-operations.html), or [HTTPS](use-a-local-file-server-for-bulk-operations.html) - -{% include {{ page.version.version }}/misc/s3-compatible-warning.md %} - -Learn more about [cloud storage for bulk operations](use-cloud-storage-for-bulk-operations.html). - -## Synopsis - -
    -{% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/import_into.html %} -
    - -{{site.data.alerts.callout_info}} -While importing into an existing table, the table is taken offline. -{{site.data.alerts.end}} - -## Parameters - -Parameter | Description -----------|------------ -`table_name` | The name of the table you want to import into. -`column_name` | The table columns you want to import.

    Note: Currently, target columns are not enforced. -`file_location` | The [URL](#import-file-location) of a CSV or Avro file containing the table data. This can be a comma-separated list of URLs. For an example, see [Import into an existing table from multiple CSV files](#import-into-an-existing-table-from-multiple-csv-files) below. -`